Unverified · Commit 3e13b9b3 authored by Kristi, committed by GitHub

update aws deployment for 2.6.0 (#7668)

### Motivation
I tried to follow the AWS deployment guide at https://pulsar.apache.org/docs/en/deploy-aws/ but found it was quite outdated: it was still installing Pulsar 2.1.0-incubating. This PR updates it to install 2.6.0.

### Modifications

* Updated the pulsar version to 2.6.0
  * Fixed download location for 2.6.0
  * Updated config files for 2.6.0
  * Fixed connector installation for 2.6.0
  * Fixed Ansible's yum warning about installing multiple packages
Parent c54a47e2
......@@ -30,7 +30,7 @@
src: "{{ item.src }}"
fstype: xfs
opts: defaults,noatime,nodiscard
state: present
state: mounted
with_items:
- { path: "/mnt/journal", src: "/dev/nvme0n1" }
- { path: "/mnt/storage", src: "/dev/nvme1n1" }
......@@ -28,20 +28,21 @@
state: directory
with_items: ["/opt/pulsar"]
- name: Install RPM packages
yum: pkg={{ item }} state=latest
with_items:
- wget
- java
- sysstat
- vim
yum:
state: latest
name:
- wget
- java
- sysstat
- vim
- set_fact:
zookeeper_servers: "{{ groups['zookeeper']|map('extract', hostvars, ['ansible_default_ipv4', 'address'])|map('regex_replace', '(.*)', '\\1:2181') | join(',') }}"
service_url: "pulsar://{{ hostvars[groups['proxy'][0]].public_ip }}:6650/"
http_url: "http://{{ hostvars[groups['proxy'][0]].public_ip }}:8080/"
pulsar_version: "2.1.0-incubating"
zookeeper_servers: "{{ groups['zookeeper']|map('extract', hostvars, ['ansible_default_ipv4', 'address'])|map('regex_replace', '^(.*)$', '\\1:2181') | join(',') }}"
service_url: "{{ pulsar_service_url }}"
http_url: "{{ pulsar_web_url }}"
pulsar_version: "2.6.0"
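For reference, a rough shell equivalent of the `zookeeper_servers` filter chain above, with hypothetical private IPs standing in for each host's `ansible_default_ipv4.address`:

```bash
# Each ZooKeeper host's address gets ":2181" appended, then all are comma-joined
ips=(10.0.0.11 10.0.0.12 10.0.0.13)
printf -v joined '%s:2181,' "${ips[@]}"
echo "${joined%,}"   # -> 10.0.0.11:2181,10.0.0.12:2181,10.0.0.13:2181
```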
- name: Download Pulsar binary package
unarchive:
src: http://archive.apache.org/dist/incubator/pulsar/pulsar-{{ pulsar_version }}/apache-pulsar-{{ pulsar_version }}-bin.tar.gz
src: https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-{{ pulsar_version }}/apache-pulsar-{{ pulsar_version }}-bin.tar.gz
remote_src: yes
dest: /opt/pulsar
extra_opts: ["--strip-components=1"]
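The new `src` goes through the Apache mirror resolver instead of a hard-coded archive path. A hedged way to confirm the URL resolves for a given release before running the play:

```bash
# Follow the mirror redirect and print the final status line
PULSAR_VERSION=2.6.0
curl -fsSLI "https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-${PULSAR_VERSION}/apache-pulsar-${PULSAR_VERSION}-bin.tar.gz" | head -n 1
```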
......@@ -123,12 +124,45 @@
connection: ssh
become: true
tasks:
- name: Download Pulsar IO package
unarchive:
src: http://archive.apache.org/dist/incubator/pulsar/pulsar-{{ pulsar_version }}/apache-pulsar-io-connectors-{{ pulsar_version }}-bin.tar.gz
remote_src: yes
dest: /opt/pulsar
extra_opts: ["--strip-components=1"]
- name: Create connectors directory
file:
path: "/opt/pulsar/{{ item }}"
state: directory
loop:
- connectors
- name: Download Pulsar IO packages
get_url:
url: https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-{{ pulsar_version }}/connectors/pulsar-io-{{ item }}-{{ pulsar_version }}.nar
dest: /opt/pulsar/connectors/pulsar-io-{{ item }}-{{ pulsar_version }}.nar
loop:
# - aerospike
# - canal
# - cassandra
# - data-generator
# - debezium-mongodb
# - debezium-mysql
# - debezium-postgres
# - dynamodb
# - elastic-search
# - file
# - flume
# - hbase
# - hdfs2
# - hdfs3
# - influxdb
# - jdbc-clickhouse
# - jdbc-mariadb
# - jdbc-postgres
# - jdbc-sqlite
- kafka
# - kafka-connect-adaptor
# - kinesis
# - mongo
# - netty
# - rabbitmq
# - redis
# - solr
# - twitter
- name: Set up broker
template:
src: "../templates/broker.conf"
......
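Since connectors now ship as individual `.nar` files rather than one tarball, the single `unarchive` task becomes a `get_url` loop. A hedged post-deploy check that the uncommented kafka connector landed (and, only if you enable the functions worker, that the broker sees it):

```bash
ls /opt/pulsar/connectors/pulsar-io-kafka-*.nar
# Only meaningful with functionsWorkerEnabled=true in broker.conf:
/opt/pulsar/bin/pulsar-admin sinks available-sinks
/opt/pulsar/bin/pulsar-admin sources available-sources
```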
......@@ -52,8 +52,8 @@ minUsableSizeForIndexFileCreation=1073741824
# Configure a specific hostname or IP address that the bookie should use to advertise itself to
# clients. If not set, bookie will advertised its own IP address or hostname, depending on the
# listeningInterface and `seHostNameAsBookieID settings.
# advertisedAddress=
# listeningInterface and useHostNameAsBookieID settings.
advertisedAddress=
# Whether the bookie allowed to use a loopback interface as its primary
# interface(i.e. the interface it uses to establish its identity)?
......@@ -92,7 +92,7 @@ flushInterval=60000
# Whether the bookie should use its hostname to register with the
# co-ordination service(eg: Zookeeper service).
# When false, bookie will use its ipaddress for the registration.
# When false, bookie will use its ip address for the registration.
# Defaults to false.
useHostNameAsBookieID=false
......@@ -224,18 +224,18 @@ maxPendingAddRequestsPerThread=10000
auditorPeriodicBookieCheckInterval=86400
# The number of entries that a replication will rereplicate in parallel.
rereplicationEntryBatchSize=5000
rereplicationEntryBatchSize=100
# Auto-replication
# The grace period, in seconds, that the replication worker waits before fencing and
# replicating a ledger fragment that's still being written to upon bookie failure.
# openLedgerRereplicationGracePeriod=30
openLedgerRereplicationGracePeriod=30
# Whether the bookie itself can start auto-recovery service also or not
autoRecoveryDaemonEnabled=true
# How long to wait, in seconds, before starting auto recovery of a lost bookie
# lostBookieRecoveryDelay=0
lostBookieRecoveryDelay=0
#############################################################################
## Netty server settings
......@@ -268,28 +268,34 @@ serverTcpNoDelay=true
# The Recv ByteBuf allocator max buf size.
# byteBufAllocatorSizeMax=1048576
# The maximum netty frame size in bytes. Any message received larger than this will be rejected. The default value is 1G.
nettyMaxFrameSizeBytes=5253120
#############################################################################
## Journal settings
#############################################################################
# The journal format version to write.
# Available formats are 1-5:
# Available formats are 1-6:
# 1: no header
# 2: a header section was added
# 3: ledger key was introduced
# 4: fencing key was introduced
# 5: expanding header to 512 and padding writes to align sector size configured by `journalAlignmentSize`
# By default, it is `4`. If you'd like to enable `padding-writes` feature, you can set journal version to `5`.
# 6: persisting explicitLac is introduced
# By default, it is `6`.
# If you'd like to disable persisting ExplicitLac, you can set this config to < `6` and also
# fileInfoFormatVersionToWrite should be set to 0. If there is mismatch then the serverconfig is considered invalid.
# You can disable `padding-writes` by setting journal version back to `4`. This feature is available in 4.5.0
# and onward versions.
# journalFormatVersionToWrite=4
journalFormatVersionToWrite=5
# Max file size of journal file, in mega bytes
# A new journal file will be created when the old one reaches the file size limitation
journalMaxSizeMB=2048
# Max number of old journal file to kept
# Keep a number of old journal files would help data recovery in specia case
# Keep a number of old journal files would help data recovery in special case
journalMaxBackups=5
# How much space should we pre-allocate at a time in the journal.
......@@ -345,7 +351,7 @@ ledgerStorageClass=org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage
# For example:
# ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data
#
# Ideally ledger dirs and journal dir are each in a differet device,
# Ideally ledger dirs and journal dir are each in a different device,
# which reduce the contention between random i/o and sequential write.
# It is possible to run with a single disk, but performance will be significantly lower.
ledgerDirectories=data/bookkeeper/ledgers
......@@ -360,7 +366,7 @@ ledgerDirectories=data/bookkeeper/ledgers
auditorPeriodicCheckInterval=604800
# Whether sorted-ledger storage enabled (default true)
# sortedLedgerStorageEnabled=ture
# sortedLedgerStorageEnabled=true
# The skip list data size limitation (default 64MB) in EntryMemTable
# skipListSizeLimit=67108864L
......@@ -376,9 +382,19 @@ auditorPeriodicCheckInterval=604800
# to gain performance according your requirements.
openFileLimit=0
# The fileinfo format version to write.
# Available formats are 0-1:
# 0: Initial version
# 1: persisting explicitLac is introduced
# By default, it is `1`.
# If you'd like to disable persisting ExplicitLac, you can set this config to 0 and
# also journalFormatVersionToWrite should be set to < 6. If there is mismatch then the
# serverconfig is considered invalid.
fileInfoFormatVersionToWrite=0
# Size of a index page in ledger cache, in bytes
# A larger index page can improve performance writing page to disk,
# which is efficent when you have small number of ledgers and these
# which is efficient when you have small number of ledgers and these
# ledgers have similar number of entries.
# If you have large number of ledgers and each ledger has fewer entries,
# smaller index page would improve memory usage.
......@@ -391,7 +407,7 @@ openFileLimit=0
# pageLimit*pageSize should not more than JVM max memory limitation,
# otherwise you would got OutOfMemoryException.
# In general, incrementing pageLimit, using smaller index page would
# gain bettern performance in lager number of ledgers with fewer entries case
# gain better performance in lager number of ledgers with fewer entries case
# If pageLimit is -1, bookie server will use 1/3 of JVM memory to compute
# the limitation of number of index pages.
pageLimit=0
......@@ -405,7 +421,7 @@ pageLimit=0
# and garbage collected. Try to read 'BookKeeper Internals' for detail info.
# ledgerManagerFactoryClass=org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory
# @Drepcated - `ledgerManagerType` is deprecated in favor of using `ledgerManagerFactoryClass`.
# @Deprecated - `ledgerManagerType` is deprecated in favor of using `ledgerManagerFactoryClass`.
# ledgerManagerType=hierarchical
# Root Zookeeper path to store ledger metadata
......@@ -429,7 +445,7 @@ entryLogFilePreallocationEnabled=true
# happens on log rotation.
# Flushing in smaller chunks but more frequently reduces spikes in disk
# I/O. Flushing too frequently may also affect performance negatively.
# flushEntrylogBytes=0
flushEntrylogBytes=268435456
# The number of bytes we should use as capacity for BufferedReadChannel. Default is 512 bytes.
readBufferSizeBytes=4096
......@@ -462,6 +478,7 @@ minorCompactionThreshold=0.2
# Interval to run minor compaction, in seconds
# If it is set to less than zero, the minor compaction is disabled.
# Note: should be greater than gcWaitTime.
minorCompactionInterval=3600
# Set the maximum number of entries which can be compacted without flushing.
......@@ -484,6 +501,7 @@ majorCompactionThreshold=0.5
# Interval to run major compaction, in seconds
# If it is set to less than zero, the major compaction is disabled.
# Note: should be greater than gcWaitTime.
majorCompactionInterval=86400
# Throttle compaction by bytes or by entries.
......@@ -521,7 +539,7 @@ readOnlyModeEnabled=true
# Whether the bookie is force started in read only mode or not
# forceReadOnlyBookie=false
# Persiste the bookie status locally on the disks. So the bookies can keep their status upon restarts
# Persist the bookie status locally on the disks. So the bookies can keep their status upon restarts
# @Since 4.6
# persistBookieStatusEnabled=false
......@@ -531,7 +549,7 @@ readOnlyModeEnabled=true
# For each ledger dir, maximum disk space which can be used.
# Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will
# be written to that partition. If all ledger dir partions are full, then bookie
# be written to that partition. If all ledger dir partitions are full, then bookie
# will turn to readonly mode if 'readOnlyModeEnabled=true' is set, else it will
# shutdown.
# Valid values should be in between 0 and 1 (exclusive).
......@@ -590,6 +608,16 @@ zkEnableSecurity=false
## Server parameters
#############################################################################
# The flag enables/disables starting the admin http server. Default value is 'false'.
httpServerEnabled=false
# The http server port to listen on. Default value is 8080.
# Use `8000` as the port to keep it consistent with prometheus stats provider
httpServerPort=8000
# The http server class
httpServerClass=org.apache.bookkeeper.http.vertx.VertxHttpServer
# Configure a list of server components to enable and load on a bookie server.
# This provides the plugin run extra services along with a bookie server.
#
......@@ -605,12 +633,15 @@ zkEnableSecurity=false
# Size of Write Cache. Memory is allocated from JVM direct memory.
# Write cache is used to buffer entries before flushing into the entry log
# For good performance, it should be big enough to hold a sub
dbStorage_writeCacheMaxSizeMb=512
# For good performance, it should be big enough to hold a substantial amount
# of entries in the flush interval
# By default it will be allocated to 1/4th of the available direct memory
dbStorage_writeCacheMaxSizeMb=
# Size of Read cache. Memory is allocated from JVM direct memory.
# This read cache is pre-filled doing read-ahead whenever a cache miss happens
dbStorage_readAheadCacheMaxSizeMb=256
# By default it will be allocated to 1/4th of the available direct memory
dbStorage_readAheadCacheMaxSizeMb=
# How many entries to pre-fill in cache after a read cache miss
dbStorage_readAheadCacheBatchSize=1000
......@@ -622,8 +653,8 @@ dbStorage_readAheadCacheBatchSize=1000
# Size of RocksDB block-cache. For best performance, this cache
# should be big enough to hold a significant portion of the index
# database which can reach ~2GB in some cases
# Default is 256 MBytes
dbStorage_rocksDB_blockCacheSize=268435456
# Default is to use 10% of the direct memory size
dbStorage_rocksDB_blockCacheSize=
# Other RocksDB specific tunables
dbStorage_rocksDB_writeBufferSizeMB=64
......
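After templating bookkeeper.conf and restarting the bookies, a hedged sanity check using the `bookkeeper` CLI bundled with the Pulsar distribution (`bookiesanity` writes and reads back a test entry on the local bookie):

```bash
/opt/pulsar/bin/bookkeeper shell bookiesanity
# List writable bookies registered in ZooKeeper
/opt/pulsar/bin/bookkeeper shell listbookies -rw
```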
......@@ -43,6 +43,27 @@ bindAddress=0.0.0.0
# Hostname or IP address the service advertises to the outside world. If not set, the value of InetAddress.getLocalHost().getHostName() is used.
advertisedAddress={{ hostvars[inventory_hostname].private_ip }}
# Used to specify multiple advertised listeners for the broker.
# The value must format as <listener_name>:pulsar://<host>:<port>,
# multiple listeners should separate with commas.
# Do not use this configuration with advertisedAddress and brokerServicePort.
# The Default value is absent means use advertisedAddress and brokerServicePort.
# advertisedListeners=
# Used to specify the internal listener name for the broker.
# The listener name must contain in the advertisedListeners.
# The Default value is absent, the broker uses the first listener as the internal listener.
# internalListenerName=
# Number of threads to use for Netty IO. Default is set to 2 * Runtime.getRuntime().availableProcessors()
numIOThreads=
# Number of threads to use for HTTP requests processing. Default is set to 2 * Runtime.getRuntime().availableProcessors()
numHttpServerThreads=
# Flag to control features that are meant to be used when running in standalone mode
isRunningStandalone=
# Name of the cluster to which this broker belongs to
clusterName={{ cluster_name }}
......@@ -52,6 +73,12 @@ failureDomainsEnabled=false
# Zookeeper session timeout in milliseconds
zooKeeperSessionTimeoutMillis=30000
# ZooKeeper operation timeout in seconds
zooKeeperOperationTimeoutSeconds=30
# ZooKeeper cache expiry time in seconds
zooKeeperCacheExpirySeconds=300
# Time to wait for broker graceful shutdown. After this time elapses, the process will be killed
brokerShutdownTimeoutMs=60000
......@@ -64,8 +91,29 @@ backlogQuotaCheckEnabled=true
# How often to check for topics that have reached the quota
backlogQuotaCheckIntervalInSeconds=60
# Default per-topic backlog quota limit
backlogQuotaDefaultLimitGB=10
# Default per-topic backlog quota limit, less than 0 means no limitation. default is -1.
backlogQuotaDefaultLimitGB=-1
# Default backlog quota retention policy. Default is producer_request_hold
# 'producer_request_hold' Policy which holds producer's send request until the resource becomes available (or holding times out)
# 'producer_exception' Policy which throws javax.jms.ResourceAllocationException to the producer
# 'consumer_backlog_eviction' Policy which evicts the oldest message from the slowest consumer's backlog
backlogQuotaDefaultRetentionPolicy=producer_request_hold
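These defaults can be overridden per namespace at runtime; a hedged `pulsar-admin` sketch (namespace, limit, and policy are examples only):

```bash
/opt/pulsar/bin/pulsar-admin namespaces set-backlog-quota public/default \
  --limit 2G --policy consumer_backlog_eviction
/opt/pulsar/bin/pulsar-admin namespaces get-backlog-quotas public/default
```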
# Default ttl for namespaces if ttl is not already configured at namespace policies. (disable default-ttl with value 0)
ttlDurationDefaultInSeconds=0
# Enable topic auto creation if new producer or consumer connected (disable auto creation with value false)
allowAutoTopicCreation=true
# The type of topic that is allowed to be automatically created.(partitioned/non-partitioned)
allowAutoTopicCreationType=non-partitioned
# Enable subscription auto creation if new consumer connected (disable auto creation with value false)
allowAutoSubscriptionCreation=true
# The number of partitioned topics that is allowed to be automatically created if allowAutoTopicCreationType is partitioned.
defaultNumPartitions=1
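With auto-creation enabled as above, producing to a topic that does not exist yet creates it as a non-partitioned topic; a hedged example with a made-up topic name:

```bash
/opt/pulsar/bin/pulsar-client produce my-new-topic -m "hello"
/opt/pulsar/bin/pulsar-admin topics list public/default   # my-new-topic should now be listed
```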
# Enable the deletion of inactive topics
brokerDeleteInactiveTopicsEnabled=true
......@@ -73,6 +121,21 @@ brokerDeleteInactiveTopicsEnabled=true
# How often to check for inactive topics
brokerDeleteInactiveTopicsFrequencySeconds=60
# Set the inactive topic delete mode. Default is delete_when_no_subscriptions
# 'delete_when_no_subscriptions' mode only delete the topic which has no subscriptions and no active producers
# 'delete_when_subscriptions_caught_up' mode only delete the topic that all subscriptions has no backlogs(caught up)
# and no active producers/consumers
brokerDeleteInactiveTopicsMode=delete_when_no_subscriptions
# Max duration of topic inactivity in seconds, default is not present
# If not present, 'brokerDeleteInactiveTopicsFrequencySeconds' will be used
# Topics that are inactive for longer than this value will be deleted
brokerDeleteInactiveTopicsMaxInactiveDurationSeconds=
# Max pending publish requests per connection to avoid keeping large number of pending
# requests in memory. Default: 1000
maxPendingPublishdRequestsPerConnection=1000
# How frequently to proactively check and purge expired messages
messageExpiryCheckIntervalInMinutes=5
......@@ -83,9 +146,23 @@ activeConsumerFailoverDelayTimeMillis=1000
# When it is 0, inactive subscriptions are not deleted automatically
subscriptionExpirationTimeMinutes=0
# Enable subscription message redelivery tracker to send redelivery count to consumer (default is enabled)
subscriptionRedeliveryTrackerEnabled=true
# How frequently to proactively check and purge expired subscription
subscriptionExpiryCheckIntervalInMinutes=5
# Enable Key_Shared subscription (default is enabled)
subscriptionKeySharedEnable=true
# On KeyShared subscriptions, with default AUTO_SPLIT mode, use splitting ranges or
# consistent hashing to reassign keys to new consumers
subscriptionKeySharedUseConsistentHashing=false
# On KeyShared subscriptions, number of points in the consistent-hashing ring.
# The higher the number, the more equal the assignment of keys to consumers
subscriptionKeySharedConsistentHashingReplicaPoints=100
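A hedged consumer sketch exercising Key_Shared, assuming your `pulsar-client` build supports the `-st/--subscription-type` flag:

```bash
# Consumers sharing this subscription have keys hashed across them
/opt/pulsar/bin/pulsar-client consume my-topic -s my-sub -st Key_Shared -n 0
```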
# Set the default behavior for message deduplication in the broker
# This can be overridden per-namespace. If enabled, broker will reject
# messages that were already stored in the topic
......@@ -142,6 +219,33 @@ maxUnackedMessagesPerBroker=0
# limit/2 messages
maxUnackedMessagesPerSubscriptionOnBrokerBlocked=0.16
# Tick time to schedule task that checks topic publish rate limiting across all topics
# Reducing to lower value can give more accuracy while throttling publish but
# it uses more CPU to perform frequent check. (Disable publish throttling with value 0)
topicPublisherThrottlingTickTimeMillis=10
# Tick time to schedule task that checks broker publish rate limiting across all topics
# Reducing to lower value can give more accuracy while throttling publish but
# it uses more CPU to perform frequent check. (Disable publish throttling with value 0)
brokerPublisherThrottlingTickTimeMillis=50
# Max Rate(in 1 seconds) of Message allowed to publish for a broker if broker publish rate limiting enabled
# (Disable message rate limit with value 0)
brokerPublisherThrottlingMaxMessageRate=0
# Max Rate(in 1 seconds) of Byte allowed to publish for a broker if broker publish rate limiting enabled.
# (Disable byte rate limit with value 0)
brokerPublisherThrottlingMaxByteRate=0
# Too many subscribe requests from a consumer can cause broker rewinding consumer cursors and loading data from bookies,
# hence causing high network bandwidth usage
# When the positive value is set, broker will throttle the subscribe requests for one consumer.
# Otherwise, the throttling will be disabled. The default value of this setting is 0 - throttling is disabled.
subscribeThrottlingRatePerConsumer=0
# Rate period for {subscribeThrottlingRatePerConsumer}. Default is 30s.
subscribeRatePeriodPerConsumerInSecond=30
# Default messages per second dispatch throttling-limit for every topic. Using a value of 0, is disabling default
# message dispatch-throttling
dispatchThrottlingRatePerTopicInMsg=0
......@@ -150,10 +254,48 @@ dispatchThrottlingRatePerTopicInMsg=0
# default message-byte dispatch-throttling
dispatchThrottlingRatePerTopicInByte=0
# Default number of message dispatching throttling-limit for a subscription.
# Using a value of 0, is disabling default message dispatch-throttling.
dispatchThrottlingRatePerSubscriptionInMsg=0
# Default number of message-bytes dispatching throttling-limit for a subscription.
# Using a value of 0, is disabling default message-byte dispatch-throttling.
dispatchThrottlingRatePerSubscriptionInByte=0
# Default messages per second dispatch throttling-limit for every replicator in replication.
# Using a value of 0, is disabling replication message dispatch-throttling
dispatchThrottlingRatePerReplicatorInMsg=0
# Default bytes per second dispatch throttling-limit for every replicator in replication.
# Using a value of 0, is disabling replication message-byte dispatch-throttling
dispatchThrottlingRatePerReplicatorInByte=0
# Dispatch rate-limiting relative to publish rate.
# (Enabling flag will make broker to dynamically update dispatch-rate relatively to publish-rate:
# throttle-dispatch-rate = (publish-rate + configured dispatch-rate).
dispatchThrottlingRateRelativeToPublishRate=false
# By default we enable dispatch-throttling for both caught up consumers as well as consumers who have
# backlog.
dispatchThrottlingOnNonBacklogConsumerEnabled=true
# Max number of entries to read from bookkeeper. By default it is 100 entries.
dispatcherMaxReadBatchSize=100
# Max size in bytes of entries to read from bookkeeper. By default it is 5MB.
dispatcherMaxReadSizeBytes=5242880
# Min number of entries to read from bookkeeper. By default it is 1 entries.
# When there is an error occurred on reading entries from bookkeeper, the broker
# will backoff the batch size to this minimum number."
dispatcherMinReadBatchSize=1
# Max number of entries to dispatch for a shared subscription. By default it is 20 entries.
dispatcherMaxRoundRobinBatchSize=20
# Precise dispathcer flow control according to history message number of each entry
preciseDispatcherFlowControl=false
# Max number of concurrent lookup request broker allows to throttle heavy incoming lookup traffic
maxConcurrentLookupRequest=50000
......@@ -193,6 +335,70 @@ maxConsumersPerTopic=0
# Using a value of 0, is disabling maxConsumersPerSubscription-limit check.
maxConsumersPerSubscription=0
# Max size of messages.
maxMessageSize=5242880
# Interval between checks to see if topics with compaction policies need to be compacted
brokerServiceCompactionMonitorIntervalInSeconds=60
# Whether to enable the delayed delivery for messages.
# If disabled, messages will be immediately delivered and there will
# be no tracking overhead.
delayedDeliveryEnabled=true
# Control the tick time for when retrying on delayed delivery,
# affecting the accuracy of the delivery time compared to the scheduled time.
# Default is 1 second.
delayedDeliveryTickTimeMillis=1000
# Whether to enable acknowledge of batch local index.
acknowledgmentAtBatchIndexLevelEnabled=false
# Enable tracking of replicated subscriptions state across clusters.
enableReplicatedSubscriptions=true
# Frequency of snapshots for replicated subscriptions tracking.
replicatedSubscriptionsSnapshotFrequencyMillis=1000
# Timeout for building a consistent snapshot for tracking replicated subscriptions state.
replicatedSubscriptionsSnapshotTimeoutSeconds=30
# Max number of snapshot to be cached per subscription.
replicatedSubscriptionsSnapshotMaxCachedPerSubscription=10
# Max memory size for broker handling messages sending from producers.
# If the processing message size exceed this value, broker will stop read data
# from the connection. The processing messages means messages are sends to broker
# but broker have not send response to client, usually waiting to write to bookies.
# It's shared across all the topics running in the same broker.
# Use -1 to disable the memory limitation. Default is 1/2 of direct memory.
maxMessagePublishBufferSizeInMB=
# Interval between checks to see if message publish buffer size is exceed the max message publish buffer size
# Use 0 or negative number to disable the max publish buffer limiting.
messagePublishBufferCheckIntervalInMillis=100
# Check between intervals to see if consumed ledgers need to be trimmed
# Use 0 or negative number to disable the check
retentionCheckIntervalInSeconds=120
# Max number of partitions per partitioned topic
# Use 0 or negative number to disable the check
maxNumPartitionsPerPartitionedTopic=0
# There are two policies when zookeeper session expired happens, "shutdown" and "reconnect".
# If uses "shutdown" policy, shutdown the broker when zookeeper session expired happens.
# If uses "reconnect" policy, try to reconnect to zookeeper server and re-register metadata to zookeeper.
# Node: the "reconnect" policy is an experiment feature
zookeeperSessionExpiredPolicy=shutdown
# Enable or disable system topic
systemTopicEnabled=false
# Enable or disable topic level policies, topic level policies depends on the system topic
# Please enable the system topic first.
topicLevelPoliciesEnabled=false
### --- Authentication --- ###
# Role names that are treated as "proxy roles". If the broker sees a request with
#role as proxyRoles - it will demand to see a valid original principal.
......@@ -202,24 +408,102 @@ proxyRoles=
# else it just accepts the originalPrincipal and authorizes it (if required).
authenticateOriginalAuthData=false
# Enable TLS
# Deprecated - Use webServicePortTls and brokerServicePortTls instead
tlsEnabled=false
# Tls cert refresh duration in seconds (set 0 to check on every new connection)
tlsCertRefreshCheckDurationSec=300
# Path for the TLS certificate file
tlsCertificateFilePath=
# Path for the TLS private key file
tlsKeyFilePath=
# Path for the trusted TLS certificate file
# Path for the trusted TLS certificate file.
# This cert is used to verify that any certs presented by connecting clients
# are signed by a certificate authority. If this verification
# fails, then the certs are untrusted and the connections are dropped.
tlsTrustCertsFilePath=
# Accept untrusted TLS certificate from client
# Accept untrusted TLS certificate from client.
# If true, a client with a cert which cannot be verified with the
# 'tlsTrustCertsFilePath' cert will allowed to connect to the server,
# though the cert will not be used for client authentication.
tlsAllowInsecureConnection=false
# Specify whether Client certificates are required for TLS
# Specify the tls protocols the broker will use to negotiate during TLS handshake
# (a comma-separated list of protocol names).
# Examples:- [TLSv1.2, TLSv1.1, TLSv1]
tlsProtocols=
# Specify the tls cipher the broker will use to negotiate during TLS Handshake
# (a comma-separated list of ciphers).
# Examples:- [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]
tlsCiphers=
# Trusted client certificates are required for to connect TLS
# Reject the Connection if the Client Certificate is not trusted.
# In effect, this requires that all connecting clients perform TLS client
# authentication.
tlsRequireTrustedClientCertOnConnect=false
### --- KeyStore TLS config variables --- ###
# Enable TLS with KeyStore type configuration in broker.
tlsEnabledWithKeyStore=false
# TLS Provider for KeyStore type
tlsProvider=
# TLS KeyStore type configuration in broker: JKS, PKCS12
tlsKeyStoreType=JKS
# TLS KeyStore path in broker
tlsKeyStore=
# TLS KeyStore password for broker
tlsKeyStorePassword=
# TLS TrustStore type configuration in broker: JKS, PKCS12
tlsTrustStoreType=JKS
# TLS TrustStore path in broker
tlsTrustStore=
# TLS TrustStore password in broker
tlsTrustStorePassword=
# Whether internal client use KeyStore type to authenticate with Pulsar brokers
brokerClientTlsEnabledWithKeyStore=false
# The TLS Provider used by internal client to authenticate with other Pulsar brokers
brokerClientSslProvider=
# TLS TrustStore type configuration for internal client: JKS, PKCS12
# used by the internal client to authenticate with Pulsar brokers
brokerClientTlsTrustStoreType=JKS
# TLS TrustStore path for internal client
# used by the internal client to authenticate with Pulsar brokers
brokerClientTlsTrustStore=
# TLS TrustStore password for internal client,
# used by the internal client to authenticate with Pulsar brokers
brokerClientTlsTrustStorePassword=
# Specify the tls cipher the internal client will use to negotiate during TLS Handshake
# (a comma-separated list of ciphers)
# e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256].
# used by the internal client to authenticate with Pulsar brokers
brokerClientTlsCiphers=
# Specify the tls protocols the broker will use to negotiate during TLS handshake
# (a comma-separated list of protocol names).
# e.g. [TLSv1.2, TLSv1.1, TLSv1]
# used by the internal client to authenticate with Pulsar brokers
brokerClientTlsProtocols=
### --- Authentication --- ###
# Enable authentication
......@@ -228,6 +512,9 @@ authenticationEnabled=false
# Autentication provider name list, which is comma separated list of class names
authenticationProviders=
# Interval of time for checking for expired authentication credentials
authenticationRefreshCheckSeconds=60
# Enforce authorization
authorizationEnabled=false
......@@ -245,6 +532,7 @@ superUserRoles=
# Authentication settings of the broker itself. Used when the broker connects to other brokers,
# either in same or other clusters
brokerClientTlsEnabled=false
brokerClientAuthenticationPlugin=
brokerClientAuthenticationParameters=
brokerClientTrustCertsFilePath=
......@@ -255,8 +543,58 @@ athenzDomainNames=
# When this parameter is not empty, unauthenticated users perform as anonymousUserRole
anonymousUserRole=
### --- Token Authentication Provider --- ###
## Symmetric key
# Configure the secret key to be used to validate auth tokens
# The key can be specified like:
# tokenSecretKey=data:;base64,xxxxxxxxx
# tokenSecretKey=file:///my/secret.key
tokenSecretKey=
## Asymmetric public/private key pair
# Configure the public key to be used to validate auth tokens
# The key can be specified like:
# tokenPublicKey=data:;base64,xxxxxxxxx
# tokenPublicKey=file:///my/public.key
tokenPublicKey=
# The token "claim" that will be interpreted as the authentication "role" or "principal" by AuthenticationProviderToken (defaults to "sub" if blank)
tokenAuthClaim=
# The token audience "claim" name, e.g. "aud", that will be used to get the audience from token.
# If not set, audience will not be verified.
tokenAudienceClaim=
# The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token, need contains this.
tokenAudience=
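A hedged sketch of generating token credentials to pair with these settings, using the `pulsar tokens` tool from the distribution (key path and subject are examples only):

```bash
# Create a symmetric secret key, then issue a token whose "sub" claim becomes the role
/opt/pulsar/bin/pulsar tokens create-secret-key --output /opt/pulsar/my-secret.key
/opt/pulsar/bin/pulsar tokens create \
  --secret-key file:///opt/pulsar/my-secret.key \
  --subject test-user
```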
### --- SASL Authentication Provider --- ###
# This is a regexp, which limits the range of possible ids which can connect to the Broker using SASL.
# Default value: `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*",
# so only clients whose id contains 'pulsar' are allowed to connect.
saslJaasClientAllowedIds=
# Service Principal, for login context name.
# Default value `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker".
saslJaasBrokerSectionName=
### --- HTTP Server config --- ###
# If >0, it will reject all HTTP requests with bodies larged than the configured limit
httpMaxRequestSize=-1
### --- BookKeeper Client --- ###
# Metadata service uri that bookkeeper is used for loading corresponding metadata driver
# and resolving its metadata service location.
# This value can be fetched using `bookkeeper shell whatisinstanceid` command in BookKeeper cluster.
# For example: zk+hierarchical://localhost:2181/ledgers
# The metadata service uri list can also be semicolon separated values like below:
# zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers
bookkeeperMetadataServiceUri=
# Authentication plugin to use when connecting to bookies
bookkeeperClientAuthenticationPlugin=
......@@ -271,6 +609,9 @@ bookkeeperClientTimeoutInSeconds=30
# Using a value of 0, is disabling the speculative reads
bookkeeperClientSpeculativeReadTimeoutInMillis=0
# Use older Bookkeeper wire protocol with bookie
bookkeeperUseV2WireProtocol=true
# Enable bookies health check. Bookies that have more than the configured number of failure within
# the interval will be quarantined for some time. During this period, new ledgers won't be created
# on these bookies
......@@ -279,6 +620,11 @@ bookkeeperClientHealthCheckIntervalSeconds=60
bookkeeperClientHealthCheckErrorThresholdPerInterval=5
bookkeeperClientHealthCheckQuarantineTimeInSeconds=1800
# Specify options for the GetBookieInfo check. These settings can be useful
# to help ensure the list of bookies is up to date on the brokers.
bookkeeperGetBookieInfoIntervalSeconds=86400
bookkeeperGetBookieInfoRetryIntervalSeconds=60
# Enable rack-aware bookie selection policy. BK will chose bookies from different racks when
# forming a new bookie ensemble
bookkeeperClientRackawarePolicyEnabled=true
......@@ -295,6 +641,59 @@ bookkeeperClientReorderReadSequenceEnabled=false
# outside the specified groups will not be used by the broker
bookkeeperClientIsolationGroups=
# Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't
# have enough bookie available.
bookkeeperClientSecondaryIsolationGroups=
# Minimum bookies that should be available as part of bookkeeperClientIsolationGroups
# else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list.
bookkeeperClientMinAvailableBookiesInIsolationGroups=
# Enable/disable having read operations for a ledger to be sticky to a single bookie.
# If this flag is enabled, the client will use one single bookie (by preference) to read
# all entries for a ledger.
#
# Disable Sticy Read until {@link https://github.com/apache/bookkeeper/issues/1970} is fixed
bookkeeperEnableStickyReads=false
# Set the client security provider factory class name.
# Default: org.apache.bookkeeper.tls.TLSContextFactory
bookkeeperTLSProviderFactoryClass=org.apache.bookkeeper.tls.TLSContextFactory
# Enable tls authentication with bookie
bookkeeperTLSClientAuthentication=false
# Supported type: PEM, JKS, PKCS12. Default value: PEM
bookkeeperTLSKeyFileType=PEM
#Supported type: PEM, JKS, PKCS12. Default value: PEM
bookkeeperTLSTrustCertTypes=PEM
# Path to file containing keystore password, if the client keystore is password protected.
bookkeeperTLSKeyStorePasswordPath=
# Path to file containing truststore password, if the client truststore is password protected.
bookkeeperTLSTrustStorePasswordPath=
# Path for the TLS private key file
bookkeeperTLSKeyFilePath=
# Path for the TLS certificate file
bookkeeperTLSCertificateFilePath=
# Path for the trusted TLS certificate file
bookkeeperTLSTrustCertsFilePath=
# Enable/disable disk weight based placement. Default is false
bookkeeperDiskWeightBasedPlacementEnabled=false
# Set the interval to check the need for sending an explicit LAC
# A value of '0' disables sending any explicit LACs. Default is 0.
bookkeeperExplicitLacIntervalInMills=0
# Expose bookkeeper client managed ledger stats to prometheus. default is false
# bookkeeperClientExposeStatsToPrometheus=false
### --- Managed Ledger --- ###
# Number of bookies to use when creating a ledger
......@@ -306,18 +705,37 @@ managedLedgerDefaultWriteQuorum=2
# Number of guaranteed copies (acks to wait before write is complete)
managedLedgerDefaultAckQuorum=2
# Default type of checksum to use when writing to BookKeeper. Default is "CRC32"
# Other possible options are "CRC32C" (which is faster), "MAC" or "DUMMY" (no checksum).
managedLedgerDigestType=CRC32
# Default type of checksum to use when writing to BookKeeper. Default is "CRC32C"
# Other possible options are "CRC32", "MAC" or "DUMMY" (no checksum).
managedLedgerDigestType=CRC32C
# Number of threads to be used for managed ledger tasks dispatching
managedLedgerNumWorkerThreads=8
# Number of threads to be used for managed ledger scheduled tasks
managedLedgerNumSchedulerThreads=8
# Amount of memory to use for caching data payload in managed ledger. This memory
# is allocated from JVM direct memory and it's shared across all the topics
# running in the same broker
managedLedgerCacheSizeMB=1024
# running in the same broker. By default, uses 1/5th of available direct memory
managedLedgerCacheSizeMB=
# Whether we should make a copy of the entry payloads when inserting in cache
managedLedgerCacheCopyEntries=false
# Threshold to which bring down the cache level when eviction is triggered
managedLedgerCacheEvictionWatermark=0.9
# Configure the cache eviction frequency for the managed ledger cache (evictions/sec)
managedLedgerCacheEvictionFrequency=100.0
# All entries that have stayed in cache for more than the configured time, will be evicted
managedLedgerCacheEvictionTimeThresholdMillis=1000
# Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged'
# and thus should be set as inactive.
managedLedgerCursorBackloggedThreshold=1000
# Rate limit the amount of writes per second generated by consumer acking the messages
managedLedgerDefaultMarkDeleteRateLimit=1.0
......@@ -334,10 +752,17 @@ managedLedgerMinLedgerRolloverTimeMinutes=10
# Maximum time before forcing a ledger rollover for a topic
managedLedgerMaxLedgerRolloverTimeMinutes=240
# Maximum ledger size before triggering a rollover for a topic (MB)
managedLedgerMaxSizePerLedgerMbytes=2048
# Delay between a ledger being successfully offloaded to long term storage
# and the ledger being deleted from bookkeeper (default is 4 hours)
managedLedgerOffloadDeletionLagMs=14400000
# The number of bytes before triggering automatic offload to long term storage
# (default is -1, which is disabled)
managedLedgerOffloadAutoTriggerSizeThresholdBytes=-1
# Max number of entries to append to a cursor ledger
managedLedgerCursorMaxEntriesPerLedger=50000
......@@ -362,6 +787,27 @@ managedLedgerMaxUnackedRangesToPersistInZooKeeper=1000
# corrupted at bookkeeper and managed-cursor is stuck at that ledger.
autoSkipNonRecoverableData=false
# operation timeout while updating managed-ledger metadata.
managedLedgerMetadataOperationsTimeoutSeconds=60
# Read entries timeout when broker tries to read messages from bookkeeper.
managedLedgerReadEntryTimeoutSeconds=0
# Add entry timeout when broker tries to publish message to bookkeeper (0 to disable it).
managedLedgerAddEntryTimeoutSeconds=0
# Managed ledger prometheus stats latency rollover seconds (default: 60s)
managedLedgerPrometheusStatsLatencyRolloverSeconds=60
# Whether trace managed ledger task execution time
managedLedgerTraceTaskExecution=true
# New entries check delay for the cursor under the managed ledger.
# If no new messages in the topic, the cursor will try to check again after the delay time.
# For consumption latency sensitive scenario, can set to a smaller value or set to 0.
# Of course, use a smaller value may degrade consumption throughput. Default is 10ms.
managedLedgerNewEntriesCheckDelayInMillis=10
### --- Load balancer --- ###
# Enable load balancer
......@@ -428,6 +874,51 @@ loadBalancerOverrideBrokerNicSpeedGbps=
# Name of load manager to use
loadManagerClassName=org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
# Supported algorithms name for namespace bundle split.
# "range_equally_divide" divides the bundle into two parts with the same hash range size.
# "topic_count_equally_divide" divides the bundle into two parts with the same topics count.
supportedNamespaceBundleSplitAlgorithms=range_equally_divide,topic_count_equally_divide
# Default algorithm name for namespace bundle split
defaultNamespaceBundleSplitAlgorithm=range_equally_divide
# load shedding strategy, support OverloadShedder and ThresholdShedder, default is OverloadShedder
loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder
# The broker resource usage threshold.
# When the broker resource usage is gratter than the pulsar cluster average resource usge,
# the threshold shedder will be triggered to offload bundles from the broker.
# It only take effect in ThresholdSheddler strategy.
loadBalancerBrokerThresholdShedderPercentage=10
# When calculating new resource usage, the history usage accounts for.
# It only take effect in ThresholdSheddler strategy.
loadBalancerHistoryResourcePercentage=0.9
# The BandWithIn usage weight when calculating new resourde usage.
# It only take effect in ThresholdShedder strategy.
loadBalancerBandwithInResourceWeight=1.0
# The BandWithOut usage weight when calculating new resourde usage.
# It only take effect in ThresholdShedder strategy.
loadBalancerBandwithOutResourceWeight=1.0
# The CPU usage weight when calculating new resourde usage.
# It only take effect in ThresholdShedder strategy.
loadBalancerCPUResourceWeight=1.0
# The heap memory usage weight when calculating new resourde usage.
# It only take effect in ThresholdShedder strategy.
loadBalancerMemoryResourceWeight=1.0
# The direct memory usage weight when calculating new resourde usage.
# It only take effect in ThresholdShedder strategy.
loadBalancerDirectMemoryResourceWeight=1.0
# Bundle unload minimum throughput threshold (MB), avoding bundle unload frequently.
# It only take effect in ThresholdShedder strategy.
loadBalancerBundleUnloadMinThroughputThreshold=10
### --- Replication --- ###
# Enable replication metrics
......@@ -444,8 +935,9 @@ replicationProducerQueueSize=1000
# Replicator prefix used for replicator producer name and cursor name
replicatorPrefix=pulsar.repl
# Enable TLS when talking with other clusters to replicate messages
replicationTlsEnabled=false
# Duration to check replication policy to avoid replicator inconsistency
# due to missing ZooKeeper watch (disable with value 0)
replicatioPolicyCheckDurationSeconds=600
# Default message retention time
defaultRetentionTimeInMinutes=0
......@@ -456,6 +948,9 @@ defaultRetentionSizeInMB=0
# How often to check whether the connections are still alive
keepAliveIntervalSeconds=30
# bootstrap namespaces
bootstrapNamespaces=
### --- WebSocket --- ###
# Enable the WebSocket API service in broker
......@@ -467,6 +962,9 @@ webSocketNumIoThreads=8
# Number of connections per Broker in Pulsar Client used in WebSocket proxy
webSocketConnectionsPerBroker=8
# Time in milliseconds that idle WebSocket session times out
webSocketSessionIdleTimeoutMillis=300000
# The maximum size of a text message during parsing in WebSocket proxy
webSocketMaxTextFrameSize=1048576
......@@ -475,28 +973,59 @@ webSocketMaxTextFrameSize=1048576
# Enable topic level metrics
exposeTopicLevelMetricsInPrometheus=true
# Enable consumer level metrics. default is false
exposeConsumerLevelMetricsInPrometheus=false
# Classname of Pluggable JVM GC metrics logger that can log GC specific metrics
# jvmGCMetricsLoggerClassName=
### --- Functions --- ###
# Enable Functions Worker Service in Broker
functionsWorkerEnabled=true
functionsWorkerEnabled=false
### --- Broker Web Stats --- ###
# Enable topic level metrics
exposePublisherStats=true
statsUpdateFrequencyInSecs=60
statsUpdateInitialDelayInSecs=60
# Enable expose the precise backlog stats.
# Set false to use published counter and consumed counter to calculate, this would be more efficient but may be inaccurate.
# Default is false.
exposePreciseBacklogInPrometheus=false
### --- Schema storage --- ###
# The schema storage implementation used by this broker
schemaRegistryStorageClassName=org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory
# Enforce schema validation on following cases:
#
# - if a producer without a schema attempts to produce to a topic with schema, the producer will be
# failed to connect. PLEASE be carefully on using this, since non-java clients don't support schema.
# if you enable this setting, it will cause non-java clients failed to produce.
isSchemaValidationEnforced=false
### --- Ledger Offloading --- ###
# Driver to use to offload old data to long term storage (Possible values: S3)
# The directory for all the offloader implementations
offloadersDirectory=./offloaders
# Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage)
# When using google-cloud-storage, Make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for
# the project (check from Developers Console -> Api&auth -> APIs).
managedLedgerOffloadDriver=
# Maximum number of thread pool threads for ledger offloading
managedLedgerOffloadMaxThreads=2
# Maximum prefetch rounds for ledger reading for offloading
managedLedgerOffloadPrefetchRounds=1
# Use Open Range-Set to cache unacked messages
managedLedgerUnackedRangesOpenCacheSetEnabled=true
# For Amazon S3 ledger offload, AWS region
s3ManagedLedgerOffloadRegion=
......@@ -512,10 +1041,42 @@ s3ManagedLedgerOffloadMaxBlockSizeInBytes=67108864
# For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default)
s3ManagedLedgerOffloadReadBufferSizeInBytes=1048576
# For Google Cloud Storage ledger offload, region where offload bucket is located.
# reference this page for more details: https://cloud.google.com/storage/docs/bucket-locations
gcsManagedLedgerOffloadRegion=
# For Google Cloud Storage ledger offload, Bucket to place offloaded ledger into
gcsManagedLedgerOffloadBucket=
# For Google Cloud Storage ledger offload, Max block size in bytes. (64MB by default, 5MB minimum)
gcsManagedLedgerOffloadMaxBlockSizeInBytes=67108864
# For Google Cloud Storage ledger offload, Read buffer size in bytes (1MB by default)
gcsManagedLedgerOffloadReadBufferSizeInBytes=1048576
# For Google Cloud Storage, path to json file containing service account credentials.
# For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849
gcsManagedLedgerOffloadServiceAccountKeyFile=
#For File System Storage, file system profile path
fileSystemProfilePath=../conf/filesystem_offload_core_site.xml
#For File System Storage, file system uri
fileSystemURI=
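Once a driver and bucket are configured, offload can also be triggered manually per topic; a hedged example (topic name and threshold are examples only):

```bash
/opt/pulsar/bin/pulsar-admin topics offload --size-threshold 10M persistent://public/default/my-topic
/opt/pulsar/bin/pulsar-admin topics offload-status persistent://public/default/my-topic
```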
### --- Deprecated config variables --- ###
# Deprecated. Use configurationStoreServers
globalZookeeperServers={{ zookeeper_servers }}
# Deprecated - Enable TLS when talking with other clusters to replicate messages
replicationTlsEnabled=false
# Deprecated. Use brokerDeleteInactiveTopicsFrequencySeconds
brokerServicePurgeInactiveFrequencyInSeconds=60
\ No newline at end of file
brokerServicePurgeInactiveFrequencyInSeconds=60
### --- Transaction config variables --- ###
# Enable transaction coordinator in broker
transactionCoordinatorEnabled=true
transactionMetadataStoreProviderClassName=org.apache.pulsar.transaction.coordinator.impl.InMemTransactionMetadataStoreProvider
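After templating broker.conf across the brokers, a hedged health check from any host with the admin CLI configured (`{{ cluster_name }}` stands for the templated cluster name):

```bash
/opt/pulsar/bin/pulsar-admin brokers healthcheck
/opt/pulsar/bin/pulsar-admin brokers list {{ cluster_name }}
```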
......@@ -17,12 +17,53 @@
# under the License.
#
# Pulsar Client and pulsar-admin configuration
webServiceUrl=http://{{ hostvars[groups['pulsar'][0]].private_ip }}:8080/
brokerServiceUrl=pulsar://{{ hostvars[groups['pulsar'][0]].private_ip }}:6650/
#authPlugin=
#authParams=
#useTls=
# Configuration for pulsar-client and pulsar-admin CLI tools
# URL for Pulsar REST API (for admin operations)
# For TLS:
# webServiceUrl=https://localhost:8443/
webServiceUrl={{ http_url }}
# URL for Pulsar Binary Protocol (for produce and consume operations)
# For TLS:
# brokerServiceUrl=pulsar+ssl://localhost:6651/
brokerServiceUrl={{ service_url }}
# Authentication plugin to authenticate with servers
# e.g. for TLS
# authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
authPlugin=
# Parameters passed to authentication plugin.
# A comma separated list of key:value pairs.
# Keys depend on the configured authPlugin.
# e.g. for TLS
# authParams=tlsCertFile:/path/to/client-cert.pem,tlsKeyFile:/path/to/client-key.pem
authParams=
# Allow TLS connections to servers whose certificate cannot be
# be verified to have been signed by a trusted certificate
# authority.
tlsAllowInsecureConnection=false
# Whether server hostname must match the common name of the certificate
# the server is using.
tlsEnableHostnameVerification=false
#tlsTrustCertsFilePath
# Path for the trusted TLS certificate file.
# This cert is used to verify that any cert presented by a server
# is signed by a certificate authority. If this verification
# fails, then the cert is untrusted and the connection is dropped.
tlsTrustCertsFilePath=
# Enable TLS with KeyStore type configuration in broker.
useKeyStoreTls=false
# TLS KeyStore type configuration: JKS, PKCS12
tlsTrustStoreType=JKS
# TLS TrustStore path
tlsTrustStorePath=
# TLS TrustStore password
tlsTrustStorePassword=
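With client.conf templated this way, the CLI tools pick up the service URLs automatically; a hedged smoke test from any Pulsar host:

```bash
/opt/pulsar/bin/pulsar-admin clusters list
/opt/pulsar/bin/pulsar-client produce test-topic -m "hello from client.conf"
```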
......@@ -17,15 +17,39 @@
# under the License.
#
### --- Broker Discovery --- ###
# The ZooKeeper quorum connection string (as a comma-separated list)
zookeeperServers={{ zookeeper_servers }}
# Configuration store connection string (as a comma-separated list)
configurationStoreServers={{ zookeeper_servers }}
# if Service Discovery is Disabled this url should point to the discovery service provider.
brokerServiceURL=
brokerServiceURLTLS=
# These settings are unnecessary if `zookeeperServers` is specified
brokerWebServiceURL=
brokerWebServiceURLTLS=
# If function workers are setup in a separate cluster, configure the following 2 settings
# to point to the function workers cluster
functionWorkerWebServiceURL=
functionWorkerWebServiceURLTLS=
# ZooKeeper session timeout (in milliseconds)
zookeeperSessionTimeoutMs=30000
# ZooKeeper cache expiry time in seconds
zooKeeperCacheExpirySeconds=300
### --- Server --- ###
# Hostname or IP address the service advertises to the outside world.
# If not set, the value of `InetAddress.getLocalHost().getHostname()` is used.
advertisedAddress=
# The port to use for server binary Protobuf requests
servicePort=6650
......@@ -42,6 +66,28 @@ webServicePortTls=8443
# to service discovery health checks
statusFilePath=
# Proxy log level, default is 0.
# 0: Do not log any tcp channel info
# 1: Parse and log any tcp channel info and command info without message body
# 2: Parse and log channel info, command info and message body
proxyLogLevel=0
### ---Authorization --- ###
# Role names that are treated as "super-users," meaning that they will be able to perform all admin
# operations and publish/consume to/from all topics (as a comma-separated list)
superUserRoles=
# Whether authorization is enforced by the Pulsar proxy
authorizationEnabled=false
# Authorization provider as a fully qualified class name
authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
# Whether client authorization credentials are forwared to the broker for re-authorization.
# Authentication must be enabled via authenticationEnabled=true for this to take effect.
forwardAuthorizationCredentials=false
### --- Authentication --- ###
# Whether authentication is enabled for the Pulsar proxy
......@@ -50,11 +96,10 @@ authenticationEnabled=false
# Authentication provider name list (a comma-separated list of class names)
authenticationProviders=
# Whether authorization is enforced by the Pulsar proxy
authorizationEnabled=false
# When this parameter is not empty, unauthenticated users perform as anonymousUserRole
anonymousUserRole=
# Authorization provider as a fully qualified class name
authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
### --- Client Authentication --- ###
# The three brokerClient* authentication settings below are for the proxy itself and determine how it
# authenticates with Pulsar brokers
......@@ -68,15 +113,14 @@ brokerClientAuthenticationParameters=
# The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers
brokerClientTrustCertsFilePath=
# Role names that are treated as "super-users," meaning that they will be able to perform all admin
# operations and publish/consume to/from all topics (as a comma-separated list)
superUserRoles=
# Whether TLS is enabled when communicating with Pulsar brokers
tlsEnabledWithBroker=false
# Whether client authorization credentials are forwared to the broker for re-authorization.
# Authentication must be enabled via authenticationEnabled=true for this to take effect.
forwardAuthorizationCredentials=false
# Tls cert refresh duration in seconds (set 0 to check on every new connection)
tlsCertRefreshCheckDurationSec=300
##### --- Rate Limiting --- #####
# --- RateLimiting ----
# Max concurrent inbound connections. The proxy will reject requests beyond that.
maxConcurrentInboundConnections=10000
......@@ -85,12 +129,9 @@ maxConcurrentLookupRequests=50000
##### --- TLS --- #####
# Whether TLS is enabled for the proxy
# Deprecated - use servicePortTls and webServicePortTls instead
tlsEnabledInProxy=false
# Whether TLS is enabled when communicating with Pulsar brokers
tlsEnabledWithBroker=false
# Path for the TLS certificate file
tlsCertificateFilePath=
......@@ -112,10 +153,61 @@ tlsAllowInsecureConnection=false
# Whether the hostname is validated when the proxy creates a TLS connection with brokers
tlsHostnameVerificationEnabled=false
# Specify the tls protocols the broker will use to negotiate during TLS handshake
# (a comma-separated list of protocol names).
# Examples:- [TLSv1.2, TLSv1.1, TLSv1]
tlsProtocols=
# Specify the tls cipher the broker will use to negotiate during TLS Handshake
# (a comma-separated list of ciphers).
# Examples:- [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]
tlsCiphers=
# Whether client certificates are required for TLS. Connections are rejected if the client
# certificate isn't trusted.
tlsRequireTrustedClientCertOnConnect=false
##### --- HTTP --- #####
# Http directs to redirect to non-pulsar services.
httpReverseProxyConfigs=
# Http output buffer size. The amount of data that will be buffered for http requests
# before it is flushed to the channel. A larger buffer size may result in higher http throughput
# though it may take longer for the client to see data.
# If using HTTP streaming via the reverse proxy, this should be set to the minimum value, 1,
# so that clients see the data as soon as possible.
httpOutputBufferSize=32768
# Number of threads to use for HTTP requests processing. Default is
# 2 * Runtime.getRuntime().availableProcessors()
httpNumThreads=
### --- Token Authentication Provider --- ###
## Symmetric key
# Configure the secret key to be used to validate auth tokens
# The key can be specified like:
# tokenSecretKey=data:;base64,xxxxxxxxx
# tokenSecretKey=file:///my/secret.key
tokenSecretKey=
## Asymmetric public/private key pair
# Configure the public key to be used to validate auth tokens
# The key can be specified like:
# tokenPublicKey=data:;base64,xxxxxxxxx
# tokenPublicKey=file:///my/public.key
tokenPublicKey=
# The token "claim" that will be interpreted as the authentication "role" or "principal" by AuthenticationProviderToken (defaults to "sub" if blank)
tokenAuthClaim=
# The token audience "claim" name, e.g. "aud", that will be used to get the audience from token.
# If not set, audience will not be verified.
tokenAudienceClaim=
# The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token, need contains this.
tokenAudience=
### --- Deprecated config variables --- ###
......
......@@ -29,6 +29,11 @@ syncLimit=5
dataDir=data/zookeeper
# the port at which the clients will connect
clientPort=2181
# the port at which the admin will listen
admin.enableServer=true
admin.serverPort=9990
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
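With the AdminServer enabled on port 9990 as above, a hedged liveness probe against a ZooKeeper node (`/commands/stat` is part of ZooKeeper's AdminServer):

```bash
curl -s http://localhost:9990/commands/stat | head
```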
......@@ -44,6 +49,12 @@ autopurge.snapRetainCount=3
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
# Requires updates to be synced to media of the transaction log before finishing
# processing the update. If this option is set to 'no', ZooKeeper will not require
# updates to be synced to the media.
# WARNING: it's not recommended to run a production ZK cluster with forceSync disabled.
forceSync=yes
{% for zk in groups['zookeeper'] %}
server.{{ hostvars[zk].zid }}={{ hostvars[zk].private_ip }}:2888:3888
{% endfor %}
......@@ -173,7 +173,11 @@ Remember to enter this command just only once. If you attempt to enter this comm
## Run the Pulsar playbook
Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. To do so, enter this command:
Once you have created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible.
(Optional) If you want to use any [built-in IO connectors](io-connectors.md), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use.
To run the playbook, enter this command:
```bash
$ ansible-playbook \
......@@ -220,4 +224,3 @@ Once you are in the shell, enter the following command:
```
If all of these commands are successful, Pulsar clients can now use your cluster!