[partintro]
--
This guide describes the Apache Kafka implementation of the Spring Cloud Stream Binder.
It contains information about its design, usage, and configuration options, as well as information on how the Spring Cloud Stream concepts map onto Apache Kafka specific constructs.
In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
--

== Apache Kafka Binder

=== Usage

To use the Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:

[source,xml]
----
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
----

Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:

[source,xml]
----
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
----

=== Overview

The following image shows a simplified diagram of how the Apache Kafka binder operates:

.Kafka Binder
image::{github-raw}/docs/src/main/asciidoc/images/kafka-binder.png[width=300,scaledwidth="50%"]

The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic.
The consumer group maps directly to the same Apache Kafka concept.
Partitioning also maps directly to Apache Kafka partitions.

The binder currently uses the Apache Kafka `kafka-clients` version `2.3.1`.
This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available.
For example, with versions earlier than 0.11.x.x, native headers are not supported.
Also, 0.11.x.x does not support the `autoAddPartitions` property.

=== Configuration Options

This section contains the configuration options used by the Apache Kafka binder.

For common configuration options and properties pertaining to the binder, see the https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#binding-properties[binding properties] in the core documentation.

==== Kafka Binder Properties

spring.cloud.stream.kafka.binder.brokers::
A list of brokers to which the Kafka binder connects.
+
Default: `localhost`.

spring.cloud.stream.kafka.binder.defaultBrokerPort::
`brokers` allows hosts specified with or without port information (for example, `host1,host2:port2`).
This sets the default port when no port is configured in the broker list.
+
Default: `9092`.

spring.cloud.stream.kafka.binder.configuration::
Key/Value map of client properties (both producers and consumers) passed to all clients created by the binder.
Because these properties are used by both producers and consumers, usage should be restricted to common properties -- for example, security settings.
Unknown Kafka producer or consumer properties provided through this configuration are filtered out and not allowed to propagate.
Properties here supersede any properties set in boot.
+
Default: Empty map.

spring.cloud.stream.kafka.binder.consumerProperties::
Key/Value map of arbitrary Kafka client consumer properties.
In addition to supporting known Kafka consumer properties, unknown consumer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.

spring.cloud.stream.kafka.binder.headers::
The list of custom headers that are transported by the binder.
Only required when communicating with older applications (<= 1.3.x) with a `kafka-clients` version < 0.11.0.0.
Newer versions support headers natively.
+
Default: empty.

spring.cloud.stream.kafka.binder.healthTimeout::
The time to wait to get partition information, in seconds.
Health reports as down if this timer expires.
+
Default: 10.

spring.cloud.stream.kafka.binder.requiredAcks::
The number of required acks on the broker.
See the Kafka documentation for the producer `acks` property.
+
Default: `1`.

spring.cloud.stream.kafka.binder.minPartitionCount::
Effective only if `autoCreateTopics` or `autoAddPartitions` is set.
The global minimum number of partitions that the binder configures on topics on which it produces or consumes data.
It can be superseded by the `partitionCount` setting of the producer or by the value of `instanceCount * concurrency` settings of the producer (if either is larger).
+
Default: `1`.

spring.cloud.stream.kafka.binder.producerProperties::
Key/Value map of arbitrary Kafka client producer properties.
In addition to supporting known Kafka producer properties, unknown producer properties are allowed here as well.
Properties here supersede any properties set in boot and in the `configuration` property above.
+
Default: Empty map.

spring.cloud.stream.kafka.binder.replicationFactor::
The replication factor of auto-created topics if `autoCreateTopics` is active.
Can be overridden on each binding.
+
NOTE: If you are using Kafka broker versions prior to 2.4, then this value should be set to at least `1`.
Starting with version 3.0.8, the binder uses `-1` as the default value, which indicates that the broker 'default.replication.factor' property will be used to determine the number of replicas.
Check with your Kafka broker admins to see if there is a policy in place that requires a minimum replication factor.
If that is the case then, typically, the `default.replication.factor` will match that value and `-1` should be used, unless you need a replication factor greater than the minimum.
+
Default: `-1`.

spring.cloud.stream.kafka.binder.autoCreateTopics::
If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.

spring.cloud.stream.kafka.binder.autoAddPartitions::
If set to `true`, the binder creates new partitions if required.
If set to `false`, the binder relies on the partition size of the topic being already configured.
If the partition count of the target topic is smaller than the expected value, the binder fails to start.
+
Default: `false`.

spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix::
Enables transactions in the binder.
See `transaction.id` in the Kafka documentation and https://docs.spring.io/spring-kafka/reference/html/_reference.html#transactions[Transactions] in the `spring-kafka` documentation.
When transactions are enabled, individual `producer` properties are ignored and all producers use the `spring.cloud.stream.kafka.binder.transaction.producer.*` properties.
+
Default: `null` (no transactions).

spring.cloud.stream.kafka.binder.transaction.producer.*::
Global producer properties for producers in a transactional binder.
See `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` and <<kafka-producer-properties>> and the general producer properties supported by all binders.
+
Default: See individual producer properties.

spring.cloud.stream.kafka.binder.headerMapperBeanName::
The bean name of a `KafkaHeaderMapper` used for mapping `spring-messaging` headers to and from Kafka headers.
Use this, for example, if you wish to customize the trusted packages in a `BinderHeaderMapper` bean that uses JSON deserialization for the headers.
If this custom `BinderHeaderMapper` bean is not made available to the binder using this property, then the binder will look for a header mapper bean with the name `kafkaBinderHeaderMapper` that is of type `BinderHeaderMapper` before falling back to a default `BinderHeaderMapper` created by the binder.
+
Default: none.

spring.cloud.stream.kafka.binder.considerDownWhenAnyPartitionHasNoLeader::
Flag to set the binder health as `down` when any partition on the topic, regardless of the consumer that is receiving data from it, is found without a leader.
+
Default: `false`.

spring.cloud.stream.kafka.binder.certificateStoreDirectory::
When the truststore or keystore certificate location is given as a classpath URL (`classpath:...`), the binder copies the resource from the classpath location inside the JAR file to a location on the filesystem.
This is true for both broker-level certificates (`ssl.truststore.location` and `ssl.keystore.location`) and certificates intended for the schema registry (`schema.registry.ssl.truststore.location` and `schema.registry.ssl.keystore.location`).
Keep in mind that the truststore and keystore classpath locations must be provided under `spring.cloud.stream.kafka.binder.configuration...` -- for example, `spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location`, `spring.cloud.stream.kafka.binder.configuration.schema.registry.ssl.truststore.location`, and so on.
The file is moved to the location specified as the value for this property, which must be an existing directory on the filesystem that is writable by the process running the application.
If this value is not set and the certificate file is a classpath resource, then it is moved to the system's temp directory, as returned by `System.getProperty("java.io.tmpdir")`.
This is also the case if this value is present but the directory cannot be found on the filesystem or is not writable.
+
Default: none.

[[kafka-consumer-properties]]
==== Kafka Consumer Properties

NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.consumer.<property>=<value>`.

The following properties are available for Kafka consumers only and must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.

admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.

admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.

admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.

autoRebalanceEnabled::
When `true`, topic partitions are automatically rebalanced between the members of a consumer group.
When `false`, each consumer is assigned a fixed set of partitions based on `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex`.
This requires both the `spring.cloud.stream.instanceCount` and `spring.cloud.stream.instanceIndex` properties to be set appropriately on each launched instance.
The value of the `spring.cloud.stream.instanceCount` property must typically be greater than 1 in this case.
+
Default: `true`.

ackEachRecord::
When `autoCommitOffset` is `true`, this setting dictates whether to commit the offset after each record is processed.
By default, offsets are committed after all records in the batch of records returned by `consumer.poll()` have been processed.
The number of records returned by a poll can be controlled with the `max.poll.records` Kafka property, which is set through the consumer `configuration` property.
Setting this to `true` may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs.
Also, see the binder `requiredAcks` property, which also affects the performance of committing offsets.
This property is deprecated as of 3.1 in favor of using `ackMode`.
If `ackMode` is not set and batch mode is not enabled, the `RECORD` ack mode is used.
+
Default: `false`.

autoCommitOffset::
Starting with version 3.1, this property is deprecated.
See `ackMode` for more details on alternatives.
Whether to autocommit offsets when a message has been processed.
If set to `false`, a header with the key `kafka_acknowledgment` of the type `org.springframework.kafka.support.Acknowledgment` is present in the inbound message.
Applications may use this header for acknowledging messages.
See the examples section for details.
When this property is set to `false`, the Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
Also see `ackEachRecord`.
+
Default: `true`.

ackMode::
Specify the container ack mode.
This is based on the `AckMode` enumeration defined in Spring Kafka.
If the `ackEachRecord` property is set to `true` and the consumer is not in batch mode, then the `RECORD` ack mode is used; otherwise, the ack mode provided through this property is used.

autoCommitOnError::
In pollable consumers, if set to `true`, it always auto commits on error.
If not set (the default) or `false`, it does not auto commit in pollable consumers.
Note that this property is only applicable for pollable consumers.
+
Default: not set.

resetOffsets::
Whether to reset offsets on the consumer to the value provided by `startOffset`.
Must be `false` if a `KafkaBindingRebalanceListener` is provided; see <<rebalance-listener>>.
See <<reset-offsets>> for more information about this property.
+
Default: `false`.

startOffset::
The starting offset for new groups.
Allowed values: `earliest` and `latest`.
If the consumer group is set explicitly for the consumer 'binding' (through `spring.cloud.stream.bindings.<channelName>.group`), 'startOffset' is set to `earliest`.
Otherwise, it is set to `latest` for the `anonymous` consumer group.
See <<reset-offsets>> for more information about this property.
+
Default: null (equivalent to `earliest`).

enableDlq::
When set to `true`, it enables DLQ behavior for the consumer.
By default, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`.
The DLQ topic name can be configured by setting the `dlqName` property or by defining a `@Bean` of type `DlqDestinationResolver`.
This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.
See the Dead-Letter Topic Processing section for more information.
Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: `x-original-topic`, `x-exception-message`, and `x-exception-stacktrace` as `byte[]`.
By default, a failed record is sent to the same partition number in the DLQ topic as the original record.
See the Dead-Letter Topic Partition Selection section for how to change that behavior.
**Not allowed when `destinationIsPattern` is `true`.**
+
Default: `false`.

dlqPartitions::
When `enableDlq` is true, and this property is not set, a dead letter topic with the same number of partitions as the primary topic(s) is created.
Usually, dead-letter records are sent to the same partition in the dead-letter topic as the original record.
This behavior can be changed; see the Dead-Letter Topic Partition Selection section.
If this property is set to `1` and there is no `DlqPartitionFunction` bean, all dead-letter records are written to partition `0`.
If this property is greater than `1`, you **MUST** provide a `DlqPartitionFunction` bean.
Note that the actual partition count is affected by the binder's `minPartitionCount` property.
+
Default: `none`

configuration::
Map with a key/value pair containing generic Kafka consumer properties.
In addition to Kafka consumer properties, other configuration properties can be passed here -- for example, properties needed by the application, such as `spring.cloud.stream.kafka.bindings.input.consumer.configuration.foo=bar`.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.

dlqName::
The name of the DLQ topic to receive the error messages.
+
Default: null (If not specified, messages that result in errors are forwarded to a topic named `error.<destination>.<group>`).

dlqProducerProperties::
Using this, DLQ-specific producer properties can be set.
All the properties available through Kafka producer properties can be set through this property.
When native decoding is enabled on the consumer (that is, `useNativeDecoding: true`), the application must provide corresponding key/value serializers for the DLQ.
These must be provided in the form of `dlqProducerProperties.configuration.key.serializer` and `dlqProducerProperties.configuration.value.serializer`.
+
Default: Default Kafka producer properties.

standardHeaders::
Indicates which standard headers are populated by the inbound channel adapter.
Allowed values: `none`, `id`, `timestamp`, or `both`.
Useful if using native deserialization and the first component to receive a message needs an `id` (such as an aggregator that is configured to use a JDBC message store).
+
Default: `none`

converterBeanName::
The name of a bean that implements `RecordMessageConverter`.
Used in the inbound channel adapter to replace the default `MessagingMessageConverter`.
+
Default: `null`

idleEventInterval::
The interval, in milliseconds, between events indicating that no messages have recently been received.
Use an `ApplicationListener<ListenerContainerIdleEvent>` to receive these events.
See <<pause-resume>> for a usage example.
+
Default: `30000`

destinationIsPattern::
When `true`, the destination is treated as a regular expression `Pattern` used to match topic names by the broker.
When `true`, topics are not provisioned, and `enableDlq` is not allowed, because the binder does not know the topic names during the provisioning phase.
Note, the time taken to detect new topics that match the pattern is controlled by the consumer property `metadata.max.age.ms`, which (at the time of writing) defaults to 300,000ms (5 minutes).
This can be configured using the `configuration` property above.
+
Default: `false`

topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.input.consumer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.

topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.

topic.replication-factor::
The replication factor to use when provisioning topics.
Overrides the binder-wide setting.
Ignored if `replicas-assignment` is present.
+
Default: none (the binder-wide default of -1 is used).

pollTimeout::
Timeout used for polling in pollable consumers.
+
Default: 5 seconds.

transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.

txCommitRecovered::
When using a transactional binder, the offset of a recovered record (for example, when retries are exhausted and the record is sent to a dead letter topic) is committed via a new transaction, by default.
Setting this property to `false` suppresses committing the offset of recovered records.
+
Default: true.

commonErrorHandlerBeanName::
`CommonErrorHandler` bean name to use per consumer binding.
When present, this user-provided `CommonErrorHandler` takes precedence over any other error handlers defined by the binder.
This is a handy way to express error handlers, if the application does not want to use a `ListenerContainerCustomizer` and then check the destination/group combination to set an error handler.
+
Default: none.

[[reset-offsets]]
==== Resetting Offsets

When an application starts, the initial position in each assigned partition depends on two properties: `startOffset` and `resetOffsets`.
If `resetOffsets` is `false`, normal Kafka consumer https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset[`auto.offset.reset`] semantics apply.
That is, if there is no committed offset for a partition for the binding's consumer group, the position is `earliest` or `latest`.
By default, bindings with an explicit `group` use `earliest`, and anonymous bindings (with no `group`) use `latest`.
These defaults can be overridden by setting the `startOffset` binding property.
There will be no committed offset(s) the first time the binding is started with a particular `group`.
The other condition where no committed offset exists is when the offset has expired.
With modern brokers (since 2.1) and default broker properties, the offsets are expired 7 days after the last member leaves the group.
See the https://kafka.apache.org/documentation/#brokerconfigs_offsets.retention.minutes[`offsets.retention.minutes`] broker property for more information.

When `resetOffsets` is `true`, the binder applies similar semantics to those that apply when there is no committed offset on the broker, as if this binding has never consumed from the topic; that is, any current committed offset is ignored.

Following are two use cases when this might be used:

1. Consuming from a compacted topic containing key/value pairs.
Set `resetOffsets` to `true` and `startOffset` to `earliest`; the binding performs a `seekToBeginning` on all newly assigned partitions.
2. Consuming from a topic containing events, where you are only interested in events that occur while this binding is running.
Set `resetOffsets` to `true` and `startOffset` to `latest`; the binding performs a `seekToEnd` on all newly assigned partitions.

IMPORTANT: If a rebalance occurs after the initial assignment, the seeks are only performed on any newly assigned partitions that were not assigned during the initial assignment.

For more control over topic offsets, see <<rebalance-listener>>; when a listener is provided, `resetOffsets` should not be set to `true`; otherwise, that causes an error.

==== Consuming Batches

Starting with version 3.0, when `spring.cloud.stream.bindings.<channelName>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` are presented as a `List<?>` to the listener method.
Otherwise, the method is called with one record at a time.
The size of the batch is controlled by the Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, and `fetch.max.wait.ms`; refer to the Kafka documentation for more information.

IMPORTANT: Retry within the binder is not supported when using batch mode, so `maxAttempts` is overridden to 1.
You can configure a `SeekToCurrentBatchErrorHandler` (using a `ListenerContainerCustomizer`) to achieve similar functionality to retry in the binder.
You can also use a manual `AckMode` and call `Acknowledgment.nack(index, sleep)` to commit the offsets for a partial batch and have the remaining records redelivered.
Refer to the https://docs.spring.io/spring-kafka/docs/2.3.0.BUILD-SNAPSHOT/reference/html/#committing-offsets[Spring for Apache Kafka documentation] for more information about these techniques.

[[kafka-producer-properties]]
==== Kafka Producer Properties

NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.kafka.default.producer.<property>=<value>`.

The following properties are available for Kafka producers only and must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.

admin.configuration::
Since version 2.1.1, this property is deprecated in favor of `topic.properties`, and support for it will be removed in a future version.

admin.replicas-assignment::
Since version 2.1.1, this property is deprecated in favor of `topic.replicas-assignment`, and support for it will be removed in a future version.

admin.replication-factor::
Since version 2.1.1, this property is deprecated in favor of `topic.replication-factor`, and support for it will be removed in a future version.

bufferSize::
Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.
+
Default: `16384`.

sync::
Whether the producer is synchronous.
+
Default: `false`.

sendTimeoutExpression::
A SpEL expression evaluated against the outgoing message used to evaluate the time to wait for ack when synchronous publish is enabled -- for example, `headers['mySendTimeout']`.
The value of the timeout is in milliseconds.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.

batchTimeout::
How long the producer waits to allow more messages to accumulate in the same batch before sending the messages.
(Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.)
A non-zero value may increase throughput at the expense of latency.
+
Default: `0`.

messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
In the case of a regular processor (`Function<String, String>` or `Function<Message<?>, Message<?>>`), if the produced key needs to be the same as the incoming key from the topic, this property can be set as follows:
`spring.cloud.stream.kafka.bindings.<channelName>.producer.messageKeyExpression: headers['kafka_receivedMessageKey']`
There is an important caveat to keep in mind for reactive functions.
In that case, it is up to the application to manually copy the headers from the incoming messages to outbound messages.
You can set the header, for example `myKey`, and use `headers['myKey']` as suggested above or, for convenience, simply set the `KafkaHeaders.MESSAGE_KEY` header, in which case you do not need to set this property at all.
+
Default: `none`.

headerPatterns::
A comma-delimited list of simple patterns to match Spring messaging headers to be mapped to the Kafka `Headers` in the `ProducerRecord`.
Patterns can begin or end with the wildcard character (asterisk).
Patterns can be negated by prefixing with `!`.
Matching stops after the first match (positive or negative).
For example, `!ask,as*` will pass `ash` but not `ask`.
`id` and `timestamp` are never mapped.
+
Default: `*` (all headers - except the `id` and `timestamp`)

configuration::
Map with a key/value pair containing generic Kafka producer properties.
The `bootstrap.servers` property cannot be set here; use multi-binder support if you need to connect to multiple clusters.
+
Default: Empty map.

topic.properties::
A `Map` of Kafka topic properties used when provisioning new topics -- for example, `spring.cloud.stream.kafka.bindings.output.producer.topic.properties.message.format.version=0.9.0.0`
+
Default: none.

topic.replicas-assignment::
A `Map<Integer, List<Integer>>` of replica assignments, with the key being the partition and the value being the assignments.
Used when provisioning new topics.
See the `NewTopic` Javadocs in the `kafka-clients` jar.
+
Default: none.

topic.replication-factor::
The replication factor to use when provisioning topics.
Overrides the binder-wide setting.
Ignored if `replicas-assignment` is present.
+
Default: none (the binder-wide default of -1 is used).

useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
+
Default: `false`.

recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
+
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
+
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
+
Default: null.

NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
If a topic already exists with a smaller partition count and `autoAddPartitions` is disabled (the default), the binder fails to start.
If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.

compression::
Set the `compression.type` producer property.
Supported values are `none`, `gzip`, `snappy`, `lz4`, and `zstd`.
If you override the `kafka-clients` jar to 2.1.0 (or later), as discussed in the https://docs.spring.io/spring-kafka/docs/2.2.x/reference/html/deps-for-21x.html[Spring for Apache Kafka documentation], and wish to use `zstd` compression, use `spring.cloud.stream.kafka.bindings.<channelName>.producer.configuration.compression.type=zstd`.
+
Default: `none`.

transactionManager::
Bean name of a `KafkaAwareTransactionManager` used to override the binder's transaction manager for this binding.
Usually needed if you want to synchronize another transaction with the Kafka transaction, using the `ChainedKafkaTransactionManager`.
To achieve exactly once consumption and production of records, the consumer and producer bindings must all be configured with the same transaction manager.
+
Default: none.

closeTimeout::
Timeout in number of seconds to wait for when closing the producer.
+
Default: `30`

allowNonTransactional::
Normally, all output bindings associated with a transactional binder publish in a new transaction, if one is not already in process.
This property allows you to override that behavior.
If set to `true`, records published to this output binding are not run in a transaction, unless one is already in process.
+
Default: `false`

==== Usage examples

In this section, we show the use of the preceding properties for specific scenarios.

===== Example: Setting `ackMode` to `MANUAL` and Relying on Manual Acknowledgement

This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that `spring.cloud.stream.kafka.bindings.input.consumer.ackMode` be set to `MANUAL`.
Use the corresponding input channel name for your example.

[source]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    }
}
----

===== Example: Security Configuration

Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the https://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 https://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].

Use the `spring.cloud.stream.kafka.binder.configuration` option to set security properties for all clients created by the binder.

For example, to set `security.protocol` to `SASL_SSL`, set the following property:

[source]
----
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
----

All the other security properties can be set in a similar manner.

When using Kerberos, follow the instructions in the https://kafka.apache.org/090/documentation.html#security_sasl_clientconfig[reference documentation] for creating and referencing the JAAS configuration.

Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file or by using Spring Boot properties.

====== Using JAAS Configuration Files

The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:

[source,bash]
----
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
  --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
  --spring.cloud.stream.bindings.input.destination=stream.ticktock \
  --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
----

====== Using Spring Boot Properties

As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.

The following properties can be used to configure the login context of the Kafka client:

spring.cloud.stream.kafka.binder.jaas.loginModule::
The login module name.
It does not need to be set in normal cases.
+
Default: `com.sun.security.auth.module.Krb5LoginModule`.

spring.cloud.stream.kafka.binder.jaas.controlFlag::
The control flag of the login module.
+
Default: `required`.

spring.cloud.stream.kafka.binder.jaas.options::
Map with a key/value pair containing the login module options.
+
Default: Empty map.
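For reference, the same binder JAAS properties can also be supplied in `application.yml` instead of on the command line.
The following is a minimal sketch using SASL/PLAIN rather than Kerberos; the login module, username, and password shown here are illustrative placeholders, not binder defaults:

[source,yaml]
----
spring:
  cloud:
    stream:
      kafka:
        binder:
          configuration:
            security.protocol: SASL_PLAINTEXT
            sasl.mechanism: PLAIN
          jaas:
            loginModule: org.apache.kafka.common.security.plain.PlainLoginModule
            controlFlag: required
            options:
              username: my-user
              password: my-secret
----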
The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using Spring Boot configuration properties:

[source,bash]
----
java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
  --spring.cloud.stream.bindings.input.destination=stream.ticktock \
  --spring.cloud.stream.kafka.binder.autoCreateTopics=false \
  --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
  --spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
  --spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
  --spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
  --spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
----

The preceding example represents the equivalent of the following JAAS file:

[source]
----
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client-1@EXAMPLE.COM";
};
----

If the topics required already exist on the broker or will be created by an administrator, autocreation can be turned off and only client JAAS properties need to be sent.

NOTE: Do not mix JAAS configuration files and Spring Boot properties in the same application.
If the `-Djava.security.auth.login.config` system property is already present, Spring Cloud Stream ignores the Spring Boot properties.

NOTE: Be careful when using the `autoCreateTopics` and `autoAddPartitions` properties with Kerberos.
Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper.
Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.

====== Multi-binder configuration and JAAS

When connecting to multiple clusters in which each one requires a separate JAAS configuration, set the JAAS configuration by using the property `sasl.jaas.config`.
When this property is present in the application, it takes precedence over the other strategies mentioned above.
See https://cwiki.apache.org/confluence/display/KAFKA/KIP-85%3A+Dynamic+JAAS+configuration+for+Kafka+clients[KIP-85] for more details.

For example, if you have two clusters in your application with separate JAAS configurations, then the following is a template that you can use:

```
spring.cloud.stream:
  binders:
    kafka1:
      type: kafka
      environment:
        spring:
          cloud:
            stream:
              kafka:
                binder:
                  brokers: localhost:9092
                  configuration.sasl.jaas.config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin-secret\";"
    kafka2:
      type: kafka
      environment:
        spring:
          cloud:
            stream:
              kafka:
                binder:
                  brokers: localhost:9093
                  configuration.sasl.jaas.config: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"user1\" password=\"user1-secret\";"
  kafka.binder:
    configuration:
      security.protocol: SASL_PLAINTEXT
      sasl.mechanism: PLAIN
```

Note that both the Kafka clusters, and the `sasl.jaas.config` values for each of them, are different in the above configuration.
See this https://github.com/spring-cloud/spring-cloud-stream-samples/tree/main/multi-binder-samples/kafka-multi-binder-jaas[sample application] for more details on how to set up and run such an application.
[[pause-resume]]
===== Example: Pausing and Resuming the Consumer

If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
To resume, you need an `ApplicationListener` for `ListenerContainerIdleEvent` instances.
The frequency at which events are published is controlled by the `idleEventInterval` property.
Since the consumer is not thread-safe, you must call these methods on the calling thread.

The following simple application shows how to pause and resume:

[source, java]
----
@SpringBootApplication
@EnableBinding(Sink.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        System.out.println(in);
        consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
    }

    @Bean
    public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
        return event -> {
            System.out.println(event);
            if (event.getConsumer().paused().size() > 0) {
                event.getConsumer().resume(event.getConsumer().paused());
            }
        };
    }

}
----

[[kafka-transactional-binder]]
=== Transactional Binder

Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container sends the offset to the transaction and commits it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.

IMPORTANT: Normal binder retries (and dead lettering) are not supported with transactions because the retries will run in the original transaction, which may be rolled back, and any published records will be rolled back too.
When retries are enabled (the common property `maxAttempts` is greater than zero), the retry properties are used to configure a `DefaultAfterRollbackProcessor` to enable retries at the container level.
Similarly, instead of publishing dead-letter records within the transaction, this functionality is moved to the listener container, again via the `DefaultAfterRollbackProcessor`, which runs after the main transaction has rolled back.

If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.

====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders,
        @Value("${unique.tx.id.per.instance}") String txId) {

    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    KafkaTransactionManager<byte[], byte[]> tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix(txId);
    return tm;
}
----
====

Notice that we get a reference to the binder by using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
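When more than one binder is configured, the reference is obtained by the binder's configuration name instead of `null`.
The following is a minimal sketch of the same bean for a binder named `kafka1` (the name is illustrative):

====
[source, java]
----
@Bean
public PlatformTransactionManager customKafkaTransactionManager(BinderFactory binders,
        @Value("${unique.tx.id.per.instance}") String txId) {

    // look up the binder by its configuration name instead of passing null
    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder("kafka1",
            MessageChannel.class)).getTransactionalProducerFactory();
    KafkaTransactionManager<byte[], byte[]> tm = new KafkaTransactionManager<>(pf);
    tm.setTransactionIdPrefix(txId);
    return tm;
}
----
====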
Then you would use normal Spring transaction support, e.g. `TransactionTemplate` or `@Transactional`, for example:

====
[source, java]
----
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }

}
----
====

If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.

IMPORTANT: If you deploy multiple instances of your application, each instance needs a unique `transactionIdPrefix`.

[[kafka-error-channels]]
=== Error Channels

Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
See https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/current/reference/html/spring-cloud-stream.html#spring-cloud-stream-overview-error-handling[this section on error handling] for more information.

The payload of the `ErrorMessage` for a send failure is a `KafkaSendFailureException` with properties:

* `failedMessage`: The Spring Messaging `Message<?>` that failed to be sent.
* `record`: The raw `ProducerRecord` that was created from the `failedMessage`

There is no automatic handling of producer exceptions (such as sending to a dead-letter queue).
You can consume these exceptions with your own Spring Integration flow.

[[kafka-metrics]]
=== Kafka Metrics

The Kafka binder module exposes the following metric:

`spring.cloud.stream.binder.kafka.offset`: This metric indicates how many messages have not yet been consumed from a given binder's topic by a given consumer group.
The metrics provided are based on the Micrometer library.
The binder creates the `KafkaBinderMetrics` bean if Micrometer is on the classpath and no other such beans are provided by the application.
The metric contains the consumer group information, the topic, and the actual lag in committed offset from the latest offset on the topic.
This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.

You can prevent `KafkaBinderMetrics` from creating the necessary infrastructure (such as consumers) and reporting the metrics by providing the following component in the application:

```
@Component
class NoOpBindingMeters {

    NoOpBindingMeters(MeterRegistry registry) {
        registry.config().meterFilter(
                MeterFilter.denyNameStartsWith(KafkaBinderMetrics.OFFSET_LAG_METRIC_NAME));
    }
}
```

More details on how to suppress meters selectively can be found https://micrometer.io/docs/concepts#_meter_filters[here].

[[kafka-tombstones]]
=== Tombstone Records (null record values)

When using compacted topics, a record with a `null` value (also called a tombstone record) represents the deletion of a key.
To receive such messages in a `@StreamListener` method, the parameter must be marked as not required to receive a `null` value argument.

====
[source, java]
----
@StreamListener(Sink.INPUT)
public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
               @Payload(required = false) Customer customer) {
    // customer is null if a tombstone record
    ...
}
----
====

[[rebalance-listener]]
=== Using a KafkaBindingRebalanceListener

Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
Starting with version 2.1, if you provide a single `KafkaBindingRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.

====
[source, java]
----
public interface KafkaBindingRebalanceListener {

    /**
     * Invoked by the container before any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedBeforeCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {

    }

    /**
     * Invoked by the container after any pending offsets are committed.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     */
    default void onPartitionsRevokedAfterCommit(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions) {

    }

    /**
     * Invoked when partitions are initially assigned or after a rebalance.
     * Applications might only want to perform seek operations on an initial assignment.
     * @param bindingName the name of the binding.
     * @param consumer the consumer.
     * @param partitions the partitions.
     * @param initial true if this is the initial assignment.
     */
    default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
            Collection<TopicPartition> partitions, boolean initial) {

    }

}
----
====

You cannot set the `resetOffsets` consumer property to `true` when you provide a rebalance listener.

[[retry-and-dlq-processing]]
=== Retry and Dead Letter Processing

By default, when you configure retry (e.g. `maxAttempts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.

There are situations where it is preferable to move this functionality to the listener container, such as:

* The aggregate of retries and delays will exceed the consumer's `max.poll.interval.ms` property, potentially causing a partition rebalance.
* You wish to publish the dead letter to a different Kafka cluster.
* You wish to add retry listeners to the error handler.
* ...

To configure moving this functionality from the binder to the container, define a `@Bean` of type `ListenerContainerWithDlqAndRetryCustomizer`.
This interface has the following methods:

====
[source, java]
----
/**
 * Configure the container.
 * @param container the container.
 * @param destinationName the destination name.
 * @param group the group.
 * @param dlqDestinationResolver a destination resolver for the dead letter topic (if
 * enableDlq).
 * @param backOff the backOff using retry properties (if configured).
 * @see #retryAndDlqInBinding(String, String)
 */
void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName, String group,
        @Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
        @Nullable BackOff backOff);

/**
 * Return false to move retries and DLQ from the binding to a customized error handler
 * using the retry metadata and/or a {@code DeadLetterPublishingRecoverer} when
 * configured via
 * {@link #configure(AbstractMessageListenerContainer, String, String, BiFunction, BackOff)}.
 * @param destinationName the destination name.
 * @param group the group.
 * @return true to retain retries and DLQ within the binding.
 */
default boolean retryAndDlqInBinding(String destinationName, String group) {
    return true;
}
----
====

The destination resolver and `BackOff` are created from the binding properties (if configured).
You can then use these to create a custom error handler and dead letter publisher; for example:

====
[source, java]
----
@Bean
ListenerContainerWithDlqAndRetryCustomizer cust(KafkaTemplate<?, ?> template) {
    return new ListenerContainerWithDlqAndRetryCustomizer() {

        @Override
        public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName,
                String group,
                @Nullable BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> dlqDestinationResolver,
                @Nullable BackOff backOff) {

            if (destinationName.equals("topicWithLongTotalRetryConfig")) {
                ConsumerRecordRecoverer dlpr = new DeadLetterPublishingRecoverer(template,
                        dlqDestinationResolver);
                container.setCommonErrorHandler(new DefaultErrorHandler(dlpr, backOff));
            }
        }

        @Override
        public boolean retryAndDlqInBinding(String destinationName, String group) {
            return !destinationName.contains("topicWithLongTotalRetryConfig");
        }

    };
}
----
====

Now, only a single retry delay needs to be greater than the consumer's `max.poll.interval.ms` property.

[[consumer-producer-config-customizer]]
=== Customizing Consumer and Producer configuration

If you want advanced customization of the consumer and producer configuration that is used for creating `ConsumerFactory` and `ProducerFactory` in Kafka, you can implement the following customizers:

* ConsumerConfigCustomizer
* ProducerConfigCustomizer

Both of these interfaces provide a way to configure the config map used for consumer and producer properties.
For example, if you want to gain access to a bean that is defined at the application level, you can inject that in the implementation of the `configure` method.
When the binder discovers that these customizers are available as beans, it invokes the `configure` method right before creating the consumer and producer factories.

Both of these interfaces also provide access to both the binding and destination names so that they can be accessed while customizing producer and consumer properties.

[[admin-client-config-customization]]
=== Customizing AdminClient Configuration

As with the consumer and producer config customization above, applications can also customize the configuration for admin clients by providing an `AdminClientConfigCustomizer`.
The `configure` method of `AdminClientConfigCustomizer` provides access to the admin client properties, which you can use to define further customization.
The binder's Kafka topic provisioner gives the properties provided through this customizer the highest precedence.
Here is an example of providing this customizer bean:

```
@Bean
public AdminClientConfigCustomizer adminClientConfigCustomizer() {
    return props -> {
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
    };
}
```

[[custom-kafka-binder-health-indicator]]
=== Custom Kafka Binder Health Indicator

The Kafka binder activates a default health indicator when Spring Boot actuator is on the classpath.
This health indicator checks the health of the binder and any communication issues with the Kafka broker.
If an application wants to disable this default health check implementation and include a custom implementation, then it can provide an implementation for the `KafkaBinderHealth` interface.
`KafkaBinderHealth` is a marker interface that extends from `HealthIndicator`.
In the custom implementation, it must provide an implementation for the `health()` method.
The custom implementation must be present in the application configuration as a bean.
When the binder discovers the custom implementation, it will use that instead of the default implementation.
Here is an example of such a custom implementation bean in the application:

```
@Bean
public KafkaBinderHealth kafkaBinderHealthIndicator() {
    return new KafkaBinderHealth() {

        @Override
        public Health health() {
            // custom implementation details.
            return Health.up().build();
        }
    };
}
```
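A slightly fuller sketch is shown below.
It assumes that the application also defines a Kafka `AdminClient` bean to probe the cluster; the bean, the `describeCluster` check, and the 10-second timeout are illustrative assumptions rather than requirements of the binder:

```
@Bean
public KafkaBinderHealth kafkaBinderHealthIndicator(AdminClient adminClient) {
    return new KafkaBinderHealth() {

        @Override
        public Health health() {
            try {
                // cheap metadata call; fails fast when the brokers are unreachable
                String clusterId = adminClient.describeCluster().clusterId().get(10, TimeUnit.SECONDS);
                return Health.up().withDetail("clusterId", clusterId).build();
            }
            catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
                return Health.down(ex).build();
            }
            catch (Exception ex) {
                return Health.down(ex).build();
            }
        }
    };
}
```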