diff --git a/docs/en/spring-for-apache-kafka/spring-kafka.md b/docs/en/spring-for-apache-kafka/spring-kafka.md index 0287046898d38834e59d36c000d406b279d0f6f8..f8f4ad8c9ac116cf3ab384413136953f735f872e 100644 --- a/docs/en/spring-for-apache-kafka/spring-kafka.md +++ b/docs/en/spring-for-apache-kafka/spring-kafka.md @@ -1,19 +1,19 @@ # Spring for Apache Kafka -## [](#preface)1. Preface +## 1. Preface The Spring for Apache Kafka project applies core Spring concepts to the development of Kafka-based messaging solutions. We provide a “template” as a high-level abstraction for sending messages. We also provide support for Message-driven POJOs. -## [](#whats-new-part)2. What’s new? +## 2. What’s new? -### [](#spring-kafka-intro-new)2.1. What’s New in 2.8 Since 2.7 +### 2.1. What’s New in 2.8 Since 2.7 This section covers the changes made from version 2.7 to version 2.8. For changes in earlier version, see [[history]](#history). -#### [](#x28-kafka-client)2.1.1. Kafka Client Version +#### 2.1.1. Kafka Client Version This version requires the 3.0.0 `kafka-clients` @@ -22,7 +22,7 @@ This version requires the 3.0.0 `kafka-clients` See [Exactly Once Semantics](#exactly-once) and [KIP-447](https://cwiki.apache.org/confluence/display/KAFKA/KIP-447%3A+Producer+scalability+for+exactly+once+semantics) for more information. -#### [](#x28-packages)2.1.2. Package Changes +#### 2.1.2. Package Changes Classes and interfaces related to type mapping have been moved from `…​support.converter` to `…​support.mapping`. @@ -34,13 +34,13 @@ Classes and interfaces related to type mapping have been moved from `…​suppo * `Jackson2JavaTypeMapper` -#### [](#x28-ooo-commits)2.1.3. Out of Order Manual Commits +#### 2.1.3. Out of Order Manual Commits The listener container can now be configured to accept manual offset commits out of order (usually asynchronously). The container will defer the commit until the missing offset is acknowledged. See [Manually Committing Offsets](#ooo-commits) for more information. -#### [](#x28-batch-overrude)2.1.4. `@KafkaListener` Changes +#### 2.1.4. `@KafkaListener` Changes It is now possible to specify whether the listener method is a batch listener on the method itself. This allows the same container factory to be used for both record and batch listeners. @@ -54,17 +54,17 @@ See [Conversion Errors with Batch Error Handlers](#batch-listener-conv-errors) f `RecordFilterStrategy`, when used with batch listeners, can now filter the entire batch in one call. See the note at the end of [Batch Listeners](#batch-listeners) for more information. -#### [](#x28-template)2.1.5. `KafkaTemplate` Changes +#### 2.1.5. `KafkaTemplate` Changes You can now receive a single record, given the topic, partition and offset. See [Using `KafkaTemplate` to Receive](#kafka-template-receive) for more information. -#### [](#x28-eh)2.1.6. `CommonErrorHandler` Added +#### 2.1.6. `CommonErrorHandler` Added The legacy `GenericErrorHandler` and its sub-interface hierarchies for record an batch listeners have been replaced by a new single interface `CommonErrorHandler` with implementations corresponding to most legacy implementations of `GenericErrorHandler`. See [Container Error Handlers](#error-handlers) for more information. -#### [](#x28-lcc)2.1.7. Listener Container Changes +#### 2.1.7. Listener Container Changes The `interceptBeforeTx` container property is now `true` by default. 
@@ -73,18 +73,18 @@ Both exceptions are considered fatal and the container will stop by default, unl See [Using `KafkaMessageListenerContainer`](#kafka-container) and [Listener Container Properties](#container-props) for more information. -#### [](#x28-serializers)2.1.8. Serializer/Deserializer Changes +#### 2.1.8. Serializer/Deserializer Changes The `DelegatingByTopicSerializer` and `DelegatingByTopicDeserializer` are now provided. See [Delegating Serializer and Deserializer](#delegating-serialization) for more information. -#### [](#x28-dlpr)2.1.9. `DeadLetterPublishingRecover` Changes +#### 2.1.9. `DeadLetterPublishingRecover` Changes The property `stripPreviousExceptionHeaders` is now `true` by default. See [Managing Dead Letter Record Headers](#dlpr-headers) for more information. -#### [](#x28-retryable-topics-changes)2.1.10. Retryable Topics Changes +#### 2.1.10. Retryable Topics Changes Now you can use the same factory for retryable and non-retryable topics. See [Specifying a ListenerContainerFactory](#retry-topic-lcf) for more information. @@ -95,11 +95,11 @@ Refer to [Exception Classifier](#retry-topic-ex-classifier) to see how to manage The KafkaBackOffException thrown when using the retryable topics feature is now logged at DEBUG level. See [[change-kboe-logging-level]](#change-kboe-logging-level) if you need to change the logging level back to WARN or set it to any other level. -## [](#introduction)3. Introduction +## 3. Introduction This first part of the reference documentation is a high-level overview of Spring for Apache Kafka and the underlying concepts and some code snippets that can help you get up and running as quickly as possible. -### [](#quick-tour)3.1. Quick Tour +### 3.1. Quick Tour Prerequisites: You must install and run Apache Kafka. Then you must put the Spring for Apache Kafka (`spring-kafka`) JAR and all of its dependencies on your class path. @@ -143,7 +143,7 @@ compile 'org.springframework.kafka:spring-kafka' However, the quickest way to get started is to use [start.spring.io](https://start.spring.io) (or the wizards in Spring Tool Suits and Intellij IDEA) and create a project, selecting 'Spring for Apache Kafka' as a dependency. -#### [](#compatibility)3.1.1. Compatibility +#### 3.1.1. Compatibility This quick tour works with the following versions: @@ -153,14 +153,14 @@ This quick tour works with the following versions: * Minimum Java version: 8 -#### [](#getting-started)3.1.2. Getting Started +#### 3.1.2. Getting Started The simplest way to get started is to use [start.spring.io](https://start.spring.io) (or the wizards in Spring Tool Suits and Intellij IDEA) and create a project, selecting 'Spring for Apache Kafka' as a dependency. Refer to the [Spring Boot documentation](https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-kafka) for more information about its opinionated auto configuration of the infrastructure beans. Here is a minimal consumer application. -##### [](#spring-boot-consumer-app)Spring Boot Consumer App +##### Spring Boot Consumer App Example 1. Application @@ -217,7 +217,7 @@ spring.kafka.consumer.auto-offset-reset=earliest The `NewTopic` bean causes the topic to be created on the broker; it is not needed if the topic already exists. -##### [](#spring-boot-producer-app)Spring Boot Producer App +##### Spring Boot Producer App Example 3. 
Application @@ -270,7 +270,7 @@ class Application { } ``` -##### [](#with-java-configuration-no-spring-boot)With Java Configuration (No Spring Boot) +##### With Java Configuration (No Spring Boot)
| |Spring for Apache Kafka is designed to be used in a Spring Application Context.
For example, if you create the listener container yourself outside of a Spring context, not all functions will work unless you satisfy all of the `…​Aware` interfaces that the container implements.| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| @@ -435,17 +435,17 @@ class Config { As you can see, you have to define several infrastructure beans when not using Spring Boot. -## [](#reference)4. Reference +## 4. Reference This part of the reference documentation details the various components that comprise Spring for Apache Kafka. The [main chapter](#kafka) covers the core classes to develop a Kafka application with Spring. -### [](#kafka)4.1. Using Spring for Apache Kafka +### 4.1. Using Spring for Apache Kafka This section offers detailed explanations of the various concerns that impact using Spring for Apache Kafka. For a quick but less detailed introduction, see [Quick Tour](#quick-tour). -#### [](#connecting)4.1.1. Connecting to Kafka +#### 4.1.1. Connecting to Kafka * `KafkaAdmin` - see [Configuring Topics](#configuring-topics) @@ -467,7 +467,7 @@ When using `@KafkaListener` s, `stop()` and `start()` the `KafkaListenerEndpoint See the Javadocs for more information. -##### [](#factory-listeners)Factory Listeners +##### Factory Listeners Starting with version 2.5, the `DefaultKafkaProducerFactory` and `DefaultKafkaConsumerFactory` can be configured with a `Listener` to receive notifications whenever a producer or consumer is created or closed. @@ -505,7 +505,7 @@ These listeners can be used, for example, to create and bind a Micrometer `Kafka The framework provides listeners that do exactly that; see [Micrometer Native Metrics](#micrometer-native). -#### [](#configuring-topics)4.1.2. Configuring Topics +#### 4.1.2. Configuring Topics If you define a `KafkaAdmin` bean in your application context, it can automatically add topics to the broker. To do so, you can add a `NewTopic` `@Bean` for each topic to the application context. @@ -689,15 +689,15 @@ private KafkaAdmin admin; client.close(); ``` -#### [](#sending-messages)4.1.3. Sending Messages +#### 4.1.3. Sending Messages This section covers how to send messages. -##### [](#kafka-template)Using `KafkaTemplate` +##### Using `KafkaTemplate` This section covers how to use `KafkaTemplate` to send messages. -###### [](#overview)Overview +###### Overview The `KafkaTemplate` wraps a producer and provides convenience methods to send data to Kafka topics. The following listing shows the relevant methods from `KafkaTemplate`: @@ -890,7 +890,7 @@ If you wish to block the sending thread to await the result, you can invoke the You may wish to invoke `flush()` before waiting or, for convenience, the template has a constructor with an `autoFlush` parameter that causes the template to `flush()` on each send. Flushing is only needed if you have set the `linger.ms` producer property and want to immediately send a partial batch. -###### [](#examples)Examples +###### Examples This section shows examples of sending messages to Kafka: @@ -938,7 +938,7 @@ public void sendToKafka(final MyOutputData data) { Note that the cause of the `ExecutionException` is `KafkaProducerException` with the `failedProducerRecord` property. 
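For reference, here is a minimal sketch of unwrapping that failure when blocking on the send; the `BlockingSender` wrapper class, the topic name, and the 10-second timeout are illustrative assumptions, not part of the original example:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.core.KafkaProducerException;
import org.springframework.kafka.core.KafkaTemplate;

public class BlockingSender {

    private final KafkaTemplate<String, String> template;

    public BlockingSender(KafkaTemplate<String, String> template) {
        this.template = template;
    }

    public void sendAndWait(String data) throws InterruptedException, TimeoutException {
        try {
            // Block the sending thread until the send completes (or times out).
            this.template.send("someTopic", data).get(10, TimeUnit.SECONDS);
        }
        catch (ExecutionException ex) {
            // As noted above, the cause is a KafkaProducerException carrying the failed record.
            KafkaProducerException kpe = (KafkaProducerException) ex.getCause();
            ProducerRecord<?, ?> failed = kpe.getFailedProducerRecord();
            // Application-specific handling, e.g. log or re-queue the failed record.
            System.err.println("Send failed for record: " + failed);
        }
    }
}
```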
-##### [](#routing-template)Using `RoutingKafkaTemplate` +##### Using `RoutingKafkaTemplate` Starting with version 2.5, you can use a `RoutingKafkaTemplate` to select the producer at runtime, based on the destination `topic` name. @@ -989,7 +989,7 @@ The corresponding `@KafkaListener` s for this example are shown in [Annotation P For another technique to achieve similar results, but with the additional capability of sending different types to the same topic, see [Delegating Serializer and Deserializer](#delegating-serialization). -##### [](#producer-factory)Using `DefaultKafkaProducerFactory` +##### Using `DefaultKafkaProducerFactory` As seen in [Using `KafkaTemplate`](#kafka-template), a `ProducerFactory` is used to create the producer. @@ -1033,7 +1033,7 @@ void removeConfig(String configKey); Starting with version 2.8, if you provide serializers as objects (in the constructor or via the setters), the factory will invoke the `configure()` method to configure them with the configuration properties. -##### [](#replying-template)Using `ReplyingKafkaTemplate` +##### Using `ReplyingKafkaTemplate` Version 2.1.3 introduced a subclass of `KafkaTemplate` to provide request/reply semantics. The class is named `ReplyingKafkaTemplate` and has two additional methods; the following shows the method signatures: @@ -1248,7 +1248,7 @@ These header names are used by the `@KafkaListener` infrastructure to route the Starting with version 2.3, you can customize the header names - the template has 3 properties `correlationHeaderName`, `replyTopicHeaderName`, and `replyPartitionHeaderName`. This is useful if your server is not a Spring application (or does not use the `@KafkaListener`). -###### [](#exchanging-messages)Request/Reply with `Message` s +###### Request/Reply with `Message` s Version 2.7 added methods to the `ReplyingKafkaTemplate` to send and receive `spring-messaging` 's `Message` abstraction: @@ -1343,7 +1343,7 @@ val things = future2?.get(10, TimeUnit.SECONDS)?.payload things?.forEach(Consumer { thing1: Thing? -> log.info(thing1.toString()) }) ``` -##### [](#reply-message)Reply Type Message\ +##### Reply Type Message\ When the `@KafkaListener` returns a `Message`, with versions before 2.5, it was necessary to populate the reply topic and correlation id headers. In this example, we use the reply topic header from the request: @@ -1375,7 +1375,7 @@ public Message messageReturn(String in) { } ``` -##### [](#aggregating-request-reply)Aggregating Multiple Replies +##### Aggregating Multiple Replies The template in [Using `ReplyingKafkaTemplate`](#replying-template) is strictly for a single request/reply scenario. For cases where multiple receivers of a single message return a reply, you can use the `AggregatingReplyingKafkaTemplate`. @@ -1429,11 +1429,11 @@ The real `ConsumerRecord` s in the `Collection` contain the actual topic(s) from | |If you use an [`ErrorHandlingDeserializer`](#error-handling-deserializer) with this aggregating template, the framework will not automatically detect `DeserializationException` s.
Instead, the record (with a `null` value) will be returned intact, with the deserialization exception(s) in headers.
It is recommended that applications call the utility method `ReplyingKafkaTemplate.checkDeserialization()` to determine whether a deserialization exception occurred.
See its javadocs for more information.
The `replyErrorChecker` is also not called for this aggregating template; you should perform the checks on each element of the reply.| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#receiving-messages)4.1.4. Receiving Messages +#### 4.1.4. Receiving Messages You can receive messages by configuring a `MessageListenerContainer` and providing a message listener or by using the `@KafkaListener` annotation. -##### [](#message-listeners)Message Listeners +##### Message Listeners When you use a [message listener container](#message-listener-container), you must provide a listener to receive data. There are currently eight supported interfaces for message listeners. @@ -1505,7 +1505,7 @@ public interface BatchAcknowledgingConsumerAwareMessageListener extends Ba | |You should not execute any `Consumer` methods that affect the consumer’s positions and or committed offsets in your listener; the container needs to manage such information.| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#message-listener-container)Message Listener Containers +##### Message Listener Containers Two `MessageListenerContainer` implementations are provided: @@ -1535,7 +1535,7 @@ Starting with versions 2.3.8, 2.4.6, the `ConcurrentMessageListenerContainer` no The `group.instance.id` is suffixed with `-n` with `n` starting at `1`. This, together with an increased `session.timeout.ms`, can be used to reduce rebalance events, for example, when application instances are restarted. -###### [](#kafka-container)Using `KafkaMessageListenerContainer` +###### Using `KafkaMessageListenerContainer` The following constructor is available: @@ -1615,7 +1615,7 @@ Defining `authExceptionRetryInterval` allows the container to recover when prope Starting with version 2.8, when creating the consumer factory, if you provide deserializers as objects (in the constructor or via the setters), the factory will invoke the `configure()` method to configure them with the configuration properties. -###### [](#using-ConcurrentMessageListenerContainer)Using `ConcurrentMessageListenerContainer` +###### Using `ConcurrentMessageListenerContainer` The single constructor is similar to the `KafkaListenerContainer` constructor. The following listing shows the constructor’s signature: @@ -1649,7 +1649,7 @@ The metrics are grouped into the `Map` by the `cli Starting with version 2.3, the `ContainerProperties` provides an `idleBetweenPolls` option to let the main loop in the listener container to sleep between `KafkaConsumer.poll()` calls. An actual sleep interval is selected as the minimum from the provided option and difference between the `max.poll.interval.ms` consumer config and the current records batch processing time. 
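As a rough illustration of the `idleBetweenPolls` option described above, the following sketch sets it on a container factory; the bean names and the 5-second value are assumptions for the example:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Configuration
public class SlowDownConfig {

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Sleep up to 5 seconds between polls; this value plus the batch processing time
        // must stay below the max.poll.interval.ms consumer property.
        factory.getContainerProperties().setIdleBetweenPolls(5_000L);
        return factory;
    }
}
```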
-###### [](#committing-offsets)Committing Offsets +###### Committing Offsets Several options are provided for committing offsets. If the `enable.auto.commit` consumer property is `true`, Kafka auto-commits the offsets according to its configuration. @@ -1722,14 +1722,14 @@ See [Container Error Handlers](#error-handlers) for more information. | |When using partition assignment via group management, it is important to ensure the `sleep` argument (plus the time spent processing records from the previous poll) is less than the consumer `max.poll.interval.ms` property.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -###### [](#container-auto-startup)Listener Container Auto Startup +###### Listener Container Auto Startup The listener containers implement `SmartLifecycle`, and `autoStartup` is `true` by default. The containers are started in a late phase (`Integer.MAX-VALUE - 100`). Other components that implement `SmartLifecycle`, to handle data from listeners, should be started in an earlier phase. The `- 100` leaves room for later phases to enable components to be auto-started after the containers. -##### [](#ooo-commits)Manually Committing Offsets +##### Manually Committing Offsets Normally, when using `AckMode.MANUAL` or `AckMode.MANUAL_IMMEDIATE`, the acknowledgments must be acknowledged in order, because Kafka does not maintain state for each record, only a committed offset for each group/partition. Starting with version 2.8, you can now set the container property `asyncAcks`, which allows the acknowledgments for records returned by the poll to be acknowledged in any order. @@ -1739,7 +1739,7 @@ The consumer will be paused (no new records delivered) until all the offsets for | |While this feature allows applications to process records asynchronously, it should be understood that it increases the possibility of duplicate deliveries after a failure.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#kafka-listener-annotation)`@KafkaListener` Annotation +##### `@KafkaListener` Annotation The `@KafkaListener` annotation is used to designate a bean method as a listener for a listener container. The bean is wrapped in a `MessagingMessageListenerAdapter` configured with various features, such as converters to convert the data, if necessary, to match the method parameters. @@ -1747,7 +1747,7 @@ The bean is wrapped in a `MessagingMessageListenerAdapter` configured with vario You can configure most attributes on the annotation with SpEL by using `#{…​}` or property placeholders (`${…​}`). See the [Javadoc](https://docs.spring.io/spring-kafka/api/org/springframework/kafka/annotation/KafkaListener.html) for more information. -###### [](#record-listener)Record Listeners +###### Record Listeners The `@KafkaListener` annotation provides a mechanism for simple POJO listeners. The following example shows how to use it: @@ -1816,7 +1816,7 @@ public void listen(String data) { } ``` -###### [](#manual-assignment)Explicit Partition Assignment +###### Explicit Partition Assignment You can also configure POJO listeners with explicit topics and partitions (and, optionally, their initial offsets). 
The following example shows how to do so: @@ -1881,7 +1881,7 @@ public void listen(ConsumerRecord record) { The initial offset will be applied to all 6 partitions. -###### [](#manual-acknowledgment)Manual Acknowledgment +###### Manual Acknowledgment When using manual `AckMode`, you can also provide the listener with the `Acknowledgment`. The following example also shows how to use a different container factory. @@ -1895,7 +1895,7 @@ public void listen(String data, Acknowledgment ack) { } ``` -###### [](#consumer-record-metadata)Consumer Record Metadata +###### Consumer Record Metadata Finally, metadata about the record is available from message headers. You can use the following header names to retrieve the headers of the message: @@ -1940,7 +1940,7 @@ public void listen(String str, ConsumerRecordMetadata meta) { This contains all the data from the `ConsumerRecord` except the key and value. -###### [](#batch-listeners)Batch Listeners +###### Batch Listeners Starting with version 1.1, you can configure `@KafkaListener` methods to receive the entire batch of consumer records received from the consumer poll. To configure the listener container factory to create batch listeners, you can set the `batchListener` property. @@ -2037,7 +2037,7 @@ public void pollResults(ConsumerRecords records) { | |If the container factory has a `RecordFilterStrategy` configured, it is ignored for `ConsumerRecords` listeners, with a `WARN` log message emitted.
Records can only be filtered with a batch listener if the `List<ConsumerRecord<?, ?>>` form of listener is used.
By default, records are filtered one-at-a-time; starting with version 2.8, you can override `filterBatch` to filter the entire batch in one call.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -###### [](#annotation-properties)Annotation Properties +###### Annotation Properties Starting with version 2.0, the `id` property (if present) is used as the Kafka consumer `group.id` property, overriding the configured property in the consumer factory, if present. You can also set `groupId` explicitly or set `idIsGroup` to false to restore the previous behavior of using the consumer factory `group.id`. @@ -2126,7 +2126,7 @@ public void listen2(byte[] in) { } ``` -##### [](#listener-group-id)Obtaining the Consumer `group.id` +##### Obtaining the Consumer `group.id` When running the same listener code in multiple containers, it may be useful to be able to determine which container (identified by its `group.id` consumer property) that a record came from. @@ -2144,7 +2144,7 @@ public void listener(@Payload String foo, | |This is available in record listeners and batch listeners that receive a `List` of records.
It is **not** available in a batch listener that receives a `ConsumerRecords` argument.
Use the `KafkaUtils` mechanism in that case.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#container-thread-naming)Container Thread Naming +##### Container Thread Naming Listener containers currently use two task executors, one to invoke the consumer and another that is used to invoke the listener when the kafka consumer property `enable.auto.commit` is `false`. You can provide custom executors by setting the `consumerExecutor` and `listenerExecutor` properties of the container’s `ContainerProperties`. @@ -2156,7 +2156,7 @@ This executor creates threads with names similar to `-C-1` (consumer t For the `ConcurrentMessageListenerContainer`, the `` part of the thread name becomes `-m`, where `m` represents the consumer instance.`n` increments each time the container is started. So, with a bean name of `container`, threads in this container will be named `container-0-C-1`, `container-1-C-1` etc., after the container is started the first time; `container-0-C-2`, `container-1-C-2` etc., after a stop and subsequent start. -##### [](#kafka-listener-meta)`@KafkaListener` as a Meta Annotation +##### `@KafkaListener` as a Meta Annotation Starting with version 2.2, you can now use `@KafkaListener` as a meta annotation. The following example shows how to do so: @@ -2189,7 +2189,7 @@ public void listen1(String in) { } ``` -##### [](#class-level-kafkalistener)`@KafkaListener` on a Class +##### `@KafkaListener` on a Class When you use `@KafkaListener` at the class-level, you must specify `@KafkaHandler` at the method level. When messages are delivered, the converted message payload type is used to determine which method to call. @@ -2247,7 +2247,7 @@ void listen(Object in, @Header(KafkaHeaders.RECORD_METADATA) ConsumerRecordMetad } ``` -##### [](#kafkalistener-attrs)`@KafkaListener` Attribute Modification +##### `@KafkaListener` Attribute Modification Starting with version 2.7.2, you can now programmatically modify annotation attributes before the container is created. To do so, add one or more `KafkaListenerAnnotationBeanPostProcessor.AnnotationEnhancer` to the application context.`AnnotationEnhancer` is a `BiFunction, AnnotatedElement, Map` and must return a map of attributes. @@ -2272,7 +2272,7 @@ public static AnnotationEnhancer groupIdEnhancer() { } ``` -##### [](#kafkalistener-lifecycle)`@KafkaListener` Lifecycle Management +##### `@KafkaListener` Lifecycle Management The listener containers created for `@KafkaListener` annotations are not beans in the application context. Instead, they are registered with an infrastructure bean of type `KafkaListenerEndpointRegistry`. @@ -2307,7 +2307,7 @@ A collection of managed containers can be obtained by calling the registry’s ` Version 2.2.5 added a convenience method `getAllListenerContainers()`, which returns a collection of all containers, including those managed by the registry and those declared as beans. The collection returned will include any prototype beans that have been initialized, but it will not initialize any lazy bean declarations. -##### [](#kafka-validation)`@KafkaListener` `@Payload` Validation +##### `@KafkaListener` `@Payload` Validation Starting with version 2.2, it is now easier to add a `Validator` to validate `@KafkaListener` `@Payload` arguments. 
Previously, you had to configure a custom `DefaultMessageHandlerMethodFactory` and add it to the registrar. @@ -2385,7 +2385,7 @@ public KafkaListenerErrorHandler validationErrorHandler() { Starting with version 2.5.11, validation now works on payloads for `@KafkaHandler` methods in a class-level listener. See [`@KafkaListener` on a Class](#class-level-kafkalistener). -##### [](#rebalance-listeners)Rebalancing Listeners +##### Rebalancing Listeners `ContainerProperties` has a property called `consumerRebalanceListener`, which takes an implementation of the Kafka client’s `ConsumerRebalanceListener` interface. If this property is not provided, the container configures a logging listener that logs rebalance events at the `INFO` level. @@ -2438,7 +2438,7 @@ containerProperties.setConsumerRebalanceListener(new ConsumerAwareRebalanceListe | |Starting with version 2.4, a new method `onPartitionsLost()` has been added (similar to a method with the same name in `ConsumerRebalanceLister`).
The default implementation on `ConsumerRebalanceListener` simply calls `onPartitionsRevoked`.
The default implementation on `ConsumerAwareRebalanceListener` does nothing.
When supplying the listener container with a custom listener (of either type), it is important that your implementation not call `onPartitionsRevoked` from `onPartitionsLost`.
If you implement `ConsumerRebalanceListener`, you should override the default method.
This is because the listener container will call its own `onPartitionsRevoked` from its implementation of `onPartitionsLost` after calling the method on your implementation.
If your implementation delegates to the default behavior, `onPartitionsRevoked` will be called twice each time the `Consumer` calls that method on the container’s listener.| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#annotation-send-to)Forwarding Listener Results using `@SendTo` +##### Forwarding Listener Results using `@SendTo` Starting with version 2.0, if you also annotate a `@KafkaListener` with a `@SendTo` annotation and the method invocation returns a result, the result is forwarded to the topic specified by the `@SendTo`. @@ -2580,7 +2580,7 @@ When using request/reply semantics, the target partition can be requested by the | |If a listener method returns an `Iterable`, by default a record for each element as the value is sent.
Starting with version 2.3.5, set the `splitIterables` property on `@KafkaListener` to `false` and the entire result will be sent as the value of a single `ProducerRecord`.
This requires a suitable serializer in the reply template’s producer configuration.
However, if the reply is `Iterable>` the property is ignored and each message is sent separately.| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#filtering-messages)Filtering Messages +##### Filtering Messages In certain scenarios, such as rebalancing, a message that has already been processed may be redelivered. The framework cannot know whether such a message has been processed or not. @@ -2599,11 +2599,11 @@ In addition, a `FilteringBatchMessageListenerAdapter` is provided, for when you | |The `FilteringBatchMessageListenerAdapter` is ignored if your `@KafkaListener` receives a `ConsumerRecords` instead of `List>`, because `ConsumerRecords` is immutable.| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#retrying-deliveries)Retrying Deliveries +##### Retrying Deliveries See the `DefaultErrorHandler` in [Handling Exceptions](#annotation-error-handling). -##### [](#sequencing)Starting `@KafkaListener` s in Sequence +##### Starting `@KafkaListener` s in Sequence A common use case is to start a listener after another listener has consumed all the records in a topic. For example, you may want to load the contents of one or more compacted topics into memory before processing records from other topics. @@ -2652,7 +2652,7 @@ As an aside; previously, containers in each group were added to a bean of type ` These collections are now deprecated in favor of beans of type `ContainerGroup` with a bean name that is the group name, suffixed with `.group`; in the example above, there would be 2 beans `g1.group` and `g2.group`. The `Collection` beans will be removed in a future release. -##### [](#kafka-template-receive)Using `KafkaTemplate` to Receive +##### Using `KafkaTemplate` to Receive This section covers how to use `KafkaTemplate` to receive messages. @@ -2673,87 +2673,87 @@ As you can see, you need to know the partition and offset of the record(s) you n With the last two methods, each record is retrieved individually and the results assembled into a `ConsumerRecords` object. When creating the `TopicPartitionOffset` s for the request, only positive, absolute offsets are supported. -#### [](#container-props)4.1.5. Listener Container Properties +#### 4.1.5. 
Listener Container Properties | Property | Default | Description | |---------------------------------------------------------------|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| []()[`ackCount`](#ackCount) | 1 | The number of records before committing pending offsets when the `ackMode` is `COUNT` or `COUNT_TIME`. | -| []()[`adviceChain`](#adviceChain) | `null` | A chain of `Advice` objects (e.g. `MethodInterceptor` around advice) wrapping the message listener, invoked in order. | -| []()[`ackMode`](#ackMode) | BATCH | Controls how often offsets are committed - see [Committing Offsets](#committing-offsets). | -| []()[`ackOnError`](#ackOnError) | `false` | [DEPRECATED in favor of `ErrorHandler.isAckAfterHandle()`] | -| []()[`ackTime`](#ackTime) | 5000 | The time in milliseconds after which pending offsets are committed when the `ackMode` is `TIME` or `COUNT_TIME`. | -| []()[`assignmentCommitOption`](#assignmentCommitOption) | LATEST\_ONLY \_NO\_TX | Whether or not to commit the initial position on assignment; by default, the initial offset will only be committed if the `ConsumerConfig.AUTO_OFFSET_RESET_CONFIG` is `latest` and it won’t run in a transaction even if there is a transaction manager present.
See the javadocs for `ContainerProperties.AssignmentCommitOption` for more information about the available options. | -|[]()[`authExceptionRetryInterval`](#authExceptionRetryInterval)| `null` | When not null, a `Duration` to sleep between polls when an `AuthenticationException` or `AuthorizationException` is thrown by the Kafka client.
When null, such exceptions are considered fatal and the container will stop. | -| []()[`clientId`](#clientId) | (empty string) | A prefix for the `client.id` consumer property.
Overrides the consumer factory `client.id` property; in a concurrent container, `-n` is added as a suffix for each consumer instance. | -| []()[`checkDeserExWhenKeyNull`](#checkDeserExWhenKeyNull) | false | Set to `true` to always check for a `DeserializationException` header when a `null` `key` is received.
Useful when the consumer code cannot determine that an `ErrorHandlingDeserializer` has been configured, such as when using a delegating deserializer. | -| []()[`checkDeserExWhenValueNull`](#checkDeserExWhenValueNull) | false | Set to `true` to always check for a `DeserializationException` header when a `null` `value` is received.
Useful when the consumer code cannot determine that an `ErrorHandlingDeserializer` has been configured, such as when using a delegating deserializer. | -| []()[`commitCallback`](#commitCallback) | `null` | When present and `syncCommits` is `false` a callback invoked after the commit completes. | -| []()[`commitLogLevel`](#commitLogLevel) | DEBUG | The logging level for logs pertaining to committing offsets. | -| []()[`consumerRebalanceListener`](#consumerRebalanceListener) | `null` | A rebalance listener; see [Rebalancing Listeners](#rebalance-listeners). | -| []()[`consumerStartTimout`](#consumerStartTimout) | 30s | The time to wait for the consumer to start before logging an error; this might happen if, say, you use a task executor with insufficient threads. | -| []()[`consumerTaskExecutor`](#consumerTaskExecutor) |`SimpleAsyncTaskExecutor`| A task executor to run the consumer threads.
The default executor creates threads named `-C-n`; with the `KafkaMessageListenerContainer`, the name is the bean name; with the `ConcurrentMessageListenerContainer` the name is the bean name suffixed with `-n` where n is incremented for each child container. | -| []()[`deliveryAttemptHeader`](#deliveryAttemptHeader) | `false` | See [Delivery Attempts Header](#delivery-header). | -| []()[`eosMode`](#eosMode) | `V2` | Exactly Once Semantics mode; see [Exactly Once Semantics](#exactly-once). | -| []()[`fixTxOffsets`](#fixTxOffsets) | `false` |When consuming records produced by a transactional producer, and the consumer is positioned at the end of a partition, the lag can incorrectly be reported as greater than zero, due to the pseudo record used to indicate transaction commit/rollback and, possibly, the presence of rolled-back records.
This does not functionally affect the consumer but some users have expressed concern that the "lag" is non-zero.
Set this property to `true` and the container will correct such mis-reported offsets.
The check is performed before the next poll to avoid adding significant complexity to the commit processing.
At the time of writing, the lag will only be corrected if the consumer is configured with `isolation.level=read_committed` and `max.poll.records` is greater than 1.
See [KAFKA-10683](https://issues.apache.org/jira/browse/KAFKA-10683) for more information.| -| []()[`groupId`](#groupId) | `null` | Overrides the consumer `group.id` property; automatically set by the `@KafkaListener` `id` or `groupId` property. | -| []()[`idleBeforeDataMultiplier`](#idleBeforeDataMultiplier) | 5.0 | Multiplier for `idleEventInterval` that is applied before any records are received.
After a record is received, the multiplier is no longer applied.
Available since version 2.8. | -| []()[`idleBetweenPolls`](#idleBetweenPolls) | 0 | Used to slow down deliveries by sleeping the thread between polls.
The time to process a batch of records plus this value must be less than the `max.poll.interval.ms` consumer property. | -| []()[`idleEventInterval`](#idleEventInterval) | `null` | When set, enables publication of `ListenerContainerIdleEvent` s, see [Application Events](#events) and [Detecting Idle and Non-Responsive Consumers](#idle-containers).
Also see `idleBeforeDataMultiplier`. | -|[]()[`idlePartitionEventInterval`](#idlePartitionEventInterval)| `null` | When set, enables publication of `ListenerContainerIdlePartitionEvent` s, see [Application Events](#events) and [Detecting Idle and Non-Responsive Consumers](#idle-containers). | -| []()[`kafkaConsumerProperties`](#kafkaConsumerProperties) | None | Used to override any arbitrary consumer properties configured on the consumer factory. | -| []()[`logContainerConfig`](#logContainerConfig) | `false` | Set to true to log at INFO level all container properties. | -| []()[`messageListener`](#messageListener) | `null` | The message listener. | -| []()[`micrometerEnabled`](#micrometerEnabled) | `true` | Whether or not to maintain Micrometer timers for the consumer threads. | -| []()[`missingTopicsFatal`](#missingTopicsFatal) | `false` | When true prevents the container from starting if the confifgured topic(s) are not present on the broker. | -| []()[`monitorInterval`](#monitorInterval) | 30s | How often to check the state of the consumer threads for `NonResponsiveConsumerEvent` s.
See `noPollThreshold` and `pollTimeout`. | -| []()[`noPollThreshold`](#noPollThreshold) | 3.0 | Multiplied by `pollTimeOut` to determine whether to publish a `NonResponsiveConsumerEvent`.
See `monitorInterval`. | -| []()[`onlyLogRecordMetadata`](#onlyLogRecordMetadata) | `false` | Set to false to log the complete consumer record (in error, debug logs etc) instead of just `topic-partition@offset`. | -| []()[`pollTimeout`](#pollTimeout) | 5000 | The timeout passed into `Consumer.poll()`. | -| []()[`scheduler`](#scheduler) |`ThreadPoolTaskScheduler`| A scheduler on which to run the consumer monitor task. | -| []()[`shutdownTimeout`](#shutdownTimeout) | 10000 | The maximum time in ms to block the `stop()` method until all consumers stop and before publishing the container stopped event. | -| []()[`stopContainerWhenFenced`](#stopContainerWhenFenced) | `false` | Stop the listener container if a `ProducerFencedException` is thrown.
See [After-rollback Processor](#after-rollback) for more information. | -| []()[`stopImmediate`](#stopImmediate) | `false` | When the container is stopped, stop processing after the current record instead of after processing all the records from the previous poll. | -| []()[`subBatchPerPartition`](#subBatchPerPartition) | See desc. | When using a batch listener, if this is `true`, the listener is called with the results of the poll split into sub batches, one per partition.
Default `false` except when using transactions with `EOSMode.ALPHA` - see [Exactly Once Semantics](#exactly-once). | -| []()[`syncCommitTimeout`](#syncCommitTimeout) | `null` | The timeout to use when `syncCommits` is `true`.
When not set, the container will attempt to determine the `default.api.timeout.ms` consumer property and use that; otherwise it will use 60 seconds. | -| []()[`syncCommits`](#syncCommits) | `true` | Whether to use sync or async commits for offsets; see `commitCallback`. | -| []()[`topics` `topicPattern` `topicPartitions`](#topics) | n/a | The configured topics, topic pattern or explicitly assigned topics/partitions.
Mutually exclusive; at least one must be provided; enforced by `ContainerProperties` constructors. |
-| []()[`transactionManager`](#transactionManager) | `null` | See [Transactions](#transactions). |
+| [`ackCount`](#ackCount) | 1 | The number of records before committing pending offsets when the `ackMode` is `COUNT` or `COUNT_TIME`. |
+| [`adviceChain`](#adviceChain) | `null` | A chain of `Advice` objects (e.g. `MethodInterceptor` around advice) wrapping the message listener, invoked in order. |
+| [`ackMode`](#ackMode) | BATCH | Controls how often offsets are committed - see [Committing Offsets](#committing-offsets). |
+| [`ackOnError`](#ackOnError) | `false` | [DEPRECATED in favor of `ErrorHandler.isAckAfterHandle()`] |
+| [`ackTime`](#ackTime) | 5000 | The time in milliseconds after which pending offsets are committed when the `ackMode` is `TIME` or `COUNT_TIME`. |
+| [`assignmentCommitOption`](#assignmentCommitOption) | LATEST\_ONLY \_NO\_TX | Whether or not to commit the initial position on assignment; by default, the initial offset will only be committed if the `ConsumerConfig.AUTO_OFFSET_RESET_CONFIG` is `latest` and it won’t run in a transaction even if there is a transaction manager present.
See the javadocs for `ContainerProperties.AssignmentCommitOption` for more information about the available options. |
+|[`authExceptionRetryInterval`](#authExceptionRetryInterval)| `null` | When not null, a `Duration` to sleep between polls when an `AuthenticationException` or `AuthorizationException` is thrown by the Kafka client.
When null, such exceptions are considered fatal and the container will stop. |
+| [`clientId`](#clientId) | (empty string) | A prefix for the `client.id` consumer property.
Overrides the consumer factory `client.id` property; in a concurrent container, `-n` is added as a suffix for each consumer instance. |
+| [`checkDeserExWhenKeyNull`](#checkDeserExWhenKeyNull) | false | Set to `true` to always check for a `DeserializationException` header when a `null` `key` is received.
Useful when the consumer code cannot determine that an `ErrorHandlingDeserializer` has been configured, such as when using a delegating deserializer. |
+| [`checkDeserExWhenValueNull`](#checkDeserExWhenValueNull) | false | Set to `true` to always check for a `DeserializationException` header when a `null` `value` is received.
Useful when the consumer code cannot determine that an `ErrorHandlingDeserializer` has been configured, such as when using a delegating deserializer. |
+| [`commitCallback`](#commitCallback) | `null` | When present and `syncCommits` is `false` a callback invoked after the commit completes. |
+| [`commitLogLevel`](#commitLogLevel) | DEBUG | The logging level for logs pertaining to committing offsets. |
+| [`consumerRebalanceListener`](#consumerRebalanceListener) | `null` | A rebalance listener; see [Rebalancing Listeners](#rebalance-listeners). |
+| [`consumerStartTimeout`](#consumerStartTimout) | 30s | The time to wait for the consumer to start before logging an error; this might happen if, say, you use a task executor with insufficient threads. |
+| [`consumerTaskExecutor`](#consumerTaskExecutor) |`SimpleAsyncTaskExecutor`| A task executor to run the consumer threads.
The default executor creates threads named `-C-n`; with the `KafkaMessageListenerContainer`, the name is the bean name; with the `ConcurrentMessageListenerContainer` the name is the bean name suffixed with `-n` where n is incremented for each child container. |
+| [`deliveryAttemptHeader`](#deliveryAttemptHeader) | `false` | See [Delivery Attempts Header](#delivery-header). |
+| [`eosMode`](#eosMode) | `V2` | Exactly Once Semantics mode; see [Exactly Once Semantics](#exactly-once). |
+| [`fixTxOffsets`](#fixTxOffsets) | `false` |When consuming records produced by a transactional producer, and the consumer is positioned at the end of a partition, the lag can incorrectly be reported as greater than zero, due to the pseudo record used to indicate transaction commit/rollback and, possibly, the presence of rolled-back records.
This does not functionally affect the consumer but some users have expressed concern that the "lag" is non-zero.
Set this property to `true` and the container will correct such mis-reported offsets.
The check is performed before the next poll to avoid adding significant complexity to the commit processing.
At the time of writing, the lag will only be corrected if the consumer is configured with `isolation.level=read_committed` and `max.poll.records` is greater than 1.
See [KAFKA-10683](https://issues.apache.org/jira/browse/KAFKA-10683) for more information.|
+| [`groupId`](#groupId) | `null` | Overrides the consumer `group.id` property; automatically set by the `@KafkaListener` `id` or `groupId` property. |
+| [`idleBeforeDataMultiplier`](#idleBeforeDataMultiplier) | 5.0 | Multiplier for `idleEventInterval` that is applied before any records are received.
After a record is received, the multiplier is no longer applied.
Available since version 2.8. |
+| [`idleBetweenPolls`](#idleBetweenPolls) | 0 | Used to slow down deliveries by sleeping the thread between polls.
The time to process a batch of records plus this value must be less than the `max.poll.interval.ms` consumer property. |
+| [`idleEventInterval`](#idleEventInterval) | `null` | When set, enables publication of `ListenerContainerIdleEvent` s, see [Application Events](#events) and [Detecting Idle and Non-Responsive Consumers](#idle-containers).
Also see `idleBeforeDataMultiplier`. |
+|[`idlePartitionEventInterval`](#idlePartitionEventInterval)| `null` | When set, enables publication of `ListenerContainerIdlePartitionEvent` s, see [Application Events](#events) and [Detecting Idle and Non-Responsive Consumers](#idle-containers). |
+| [`kafkaConsumerProperties`](#kafkaConsumerProperties) | None | Used to override any arbitrary consumer properties configured on the consumer factory. |
+| [`logContainerConfig`](#logContainerConfig) | `false` | Set to true to log at INFO level all container properties. |
+| [`messageListener`](#messageListener) | `null` | The message listener. |
+| [`micrometerEnabled`](#micrometerEnabled) | `true` | Whether or not to maintain Micrometer timers for the consumer threads. |
+| [`missingTopicsFatal`](#missingTopicsFatal) | `false` | When true prevents the container from starting if the configured topic(s) are not present on the broker. |
+| [`monitorInterval`](#monitorInterval) | 30s | How often to check the state of the consumer threads for `NonResponsiveConsumerEvent` s.
See `noPollThreshold` and `pollTimeout`. |
+| [`noPollThreshold`](#noPollThreshold) | 3.0 | Multiplied by `pollTimeout` to determine whether to publish a `NonResponsiveConsumerEvent`.
See `monitorInterval`. |
+| [`onlyLogRecordMetadata`](#onlyLogRecordMetadata) | `false` | Set to false to log the complete consumer record (in error, debug logs etc) instead of just `topic-partition@offset`. |
+| [`pollTimeout`](#pollTimeout) | 5000 | The timeout passed into `Consumer.poll()`. |
+| [`scheduler`](#scheduler) |`ThreadPoolTaskScheduler`| A scheduler on which to run the consumer monitor task. |
+| [`shutdownTimeout`](#shutdownTimeout) | 10000 | The maximum time in ms to block the `stop()` method until all consumers stop and before publishing the container stopped event. |
+| [`stopContainerWhenFenced`](#stopContainerWhenFenced) | `false` | Stop the listener container if a `ProducerFencedException` is thrown.
See [After-rollback Processor](#after-rollback) for more information. |
+| [`stopImmediate`](#stopImmediate) | `false` | When the container is stopped, stop processing after the current record instead of after processing all the records from the previous poll. |
+| [`subBatchPerPartition`](#subBatchPerPartition) | See desc. | When using a batch listener, if this is `true`, the listener is called with the results of the poll split into sub batches, one per partition.
Default `false` except when using transactions with `EOSMode.ALPHA` - see [Exactly Once Semantics](#exactly-once). |
+| [`syncCommitTimeout`](#syncCommitTimeout) | `null` | The timeout to use when `syncCommits` is `true`.
When not set, the container will attempt to determine the `default.api.timeout.ms` consumer property and use that; otherwise it will use 60 seconds. |
+| [`syncCommits`](#syncCommits) | `true` | Whether to use sync or async commits for offsets; see `commitCallback`. |
+| [`topics` `topicPattern` `topicPartitions`](#topics) | n/a | The configured topics, topic pattern or explicitly assigned topics/partitions.
Mutually exclusive; at least one must be provided; enforced by `ContainerProperties` constructors. |
+| [`transactionManager`](#transactionManager) | `null` | See [Transactions](#transactions). |

| Property | Default | Description |
|-------------------------------------------------------------|-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| []()[`afterRollbackProcessor`](#afterRollbackProcessor) |`DefaultAfterRollbackProcessor`| An `AfterRollbackProcessor` to invoke after a transaction is rolled back. |
-|[]()[`applicationEventPublisher`](#applicationEventPublisher)| application context | The event publisher. |
-| []()[`batchErrorHandler`](#batchErrorHandler) | See desc. | Deprecated - see `commonErrorHandler`. |
-| []()[`batchInterceptor`](#batchInterceptor) | `null` | Set a `BatchInterceptor` to call before invoking the batch listener; does not apply to record listeners.
Also see `interceptBeforeTx`. | -| []()[`beanName`](#beanName) | bean name | The bean name of the container; suffixed with `-n` for child containers. | -| []()[`commonErrorHandler`](#commonErrorHandler) | See desc. |`DefaultErrorHandler` or `null` when a `transactionManager` is provided when a `DefaultAfterRollbackProcessor` is used.
See [Container Error Handlers](#error-handlers).| -| []()[`containerProperties`](#containerProperties) | `ContainerProperties` | The container properties instance. | -| []()[`errorHandler`](#errorHandler) | See desc. | Deprecated - see `commonErrorHandler`. | -| []()[`genericErrorHandler`](#genericErrorHandler) | See desc. | Deprecated - see `commonErrorHandler`. | -| []()[`groupId`](#groupId) | See desc. | The `containerProperties.groupId`, if present, otherwise the `group.id` property from the consumer factory. | -| []()[`interceptBeforeTx`](#interceptBeforeTx) | `true` | Determines whether the `recordInterceptor` is called before or after a transaction starts. | -| []()[`listenerId`](#listenerId) | See desc. | The bean name for user-configured containers or the `id` attribute of `@KafkaListener` s. | -| []()[`pauseRequested`](#pauseRequested) | (read only) | True if a consumer pause has been requested. | -| []()[`recordInterceptor`](#recordInterceptor) | `null` | Set a `RecordInterceptor` to call before invoking the record listener; does not apply to batch listeners.
Also see `interceptBeforeTx`. |
-| []()[`topicCheckTimeout`](#topicCheckTimeout) | 30s | When the `missingTopicsFatal` container property is `true`, how long to wait, in seconds, for the `describeTopics` operation to complete. |
+| [`afterRollbackProcessor`](#afterRollbackProcessor) |`DefaultAfterRollbackProcessor`| An `AfterRollbackProcessor` to invoke after a transaction is rolled back. |
+|[`applicationEventPublisher`](#applicationEventPublisher)| application context | The event publisher. |
+| [`batchErrorHandler`](#batchErrorHandler) | See desc. | Deprecated - see `commonErrorHandler`. |
+| [`batchInterceptor`](#batchInterceptor) | `null` | Set a `BatchInterceptor` to call before invoking the batch listener; does not apply to record listeners.
Also see `interceptBeforeTx`. |
+| [`beanName`](#beanName) | bean name | The bean name of the container; suffixed with `-n` for child containers. |
+| [`commonErrorHandler`](#commonErrorHandler) | See desc. |`DefaultErrorHandler` or `null` when a `transactionManager` is provided when a `DefaultAfterRollbackProcessor` is used.
See [Container Error Handlers](#error-handlers).|
+| [`containerProperties`](#containerProperties) | `ContainerProperties` | The container properties instance. |
+| [`errorHandler`](#errorHandler) | See desc. | Deprecated - see `commonErrorHandler`. |
+| [`genericErrorHandler`](#genericErrorHandler) | See desc. | Deprecated - see `commonErrorHandler`. |
+| [`groupId`](#groupId) | See desc. | The `containerProperties.groupId`, if present, otherwise the `group.id` property from the consumer factory. |
+| [`interceptBeforeTx`](#interceptBeforeTx) | `true` | Determines whether the `recordInterceptor` is called before or after a transaction starts. |
+| [`listenerId`](#listenerId) | See desc. | The bean name for user-configured containers or the `id` attribute of `@KafkaListener` s. |
+| [`pauseRequested`](#pauseRequested) | (read only) | True if a consumer pause has been requested. |
+| [`recordInterceptor`](#recordInterceptor) | `null` | Set a `RecordInterceptor` to call before invoking the record listener; does not apply to batch listeners.
Also see `interceptBeforeTx`. |
+| [`topicCheckTimeout`](#topicCheckTimeout) | 30s | When the `missingTopicsFatal` container property is `true`, how long to wait, in seconds, for the `describeTopics` operation to complete. |

| Property | Default | Description |
|-------------------------------------------------------------------|-----------|----------------------------------------------------------------------------------------------|
-| []()[`assignedPartitions`](#assignedPartitions) |(read only)| The partitions currently assigned to this container (explicitly or not). |
-|[]()[`assignedPartitionsByClientId`](#assignedPartitionsByClientId)|(read only)| The partitions currently assigned to this container (explicitly or not). |
-| []()[`clientIdSuffix`](#clientIdSuffix) | `null` |Used by the concurrent container to give each child container’s consumer a unique `client.id`.|
-| []()[`containerPaused`](#containerPaused) | n/a | True if pause has been requested and the consumer has actually paused. |
+| [`assignedPartitions`](#assignedPartitions) |(read only)| The partitions currently assigned to this container (explicitly or not). |
+|[`assignedPartitionsByClientId`](#assignedPartitionsByClientId)|(read only)| The partitions currently assigned to this container (explicitly or not). |
+| [`clientIdSuffix`](#clientIdSuffix) | `null` |Used by the concurrent container to give each child container’s consumer a unique `client.id`.|
+| [`containerPaused`](#containerPaused) | n/a | True if pause has been requested and the consumer has actually paused. |

| Property | Default | Description |
|-------------------------------------------------------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| []()[`alwaysClientIdSuffix`](#alwaysClientIdSuffix) | `true` | Set to false to suppress adding a suffix to the `client.id` consumer property, when the `concurrency` is only 1. |
-| []()[`assignedPartitions`](#assignedPartitions) |(read only)| The aggregate of partitions currently assigned to this container’s child `KafkaMessageListenerContainer` s (explicitly or not). |
-|[]()[`assignedPartitionsByClientId`](#assignedPartitionsByClientId)|(read only)|The partitions currently assigned to this container’s child `KafkaMessageListenerContainer` s (explicitly or not), keyed by the child container’s consumer’s `client.id` property.|
-| []()[`concurrency`](#concurrency) | 1 | The number of child `KafkaMessageListenerContainer` s to manage. |
-| []()[`containerPaused`](#containerPaused) | n/a | True if pause has been requested and all child containers' consumer has actually paused. |
-| []()[`containers`](#containers) | n/a | A reference to all child `KafkaMessageListenerContainer` s. |
+| [`alwaysClientIdSuffix`](#alwaysClientIdSuffix) | `true` | Set to false to suppress adding a suffix to the `client.id` consumer property, when the `concurrency` is only 1. |
+| [`assignedPartitions`](#assignedPartitions) |(read only)| The aggregate of partitions currently assigned to this container’s child `KafkaMessageListenerContainer` s (explicitly or not). |
+|[`assignedPartitionsByClientId`](#assignedPartitionsByClientId)|(read only)|The partitions currently assigned to this container’s child `KafkaMessageListenerContainer` s (explicitly or not), keyed by the child container’s consumer’s `client.id` property.|
+| [`concurrency`](#concurrency) | 1 | The number of child `KafkaMessageListenerContainer` s to manage. |
+| [`containerPaused`](#containerPaused) | n/a | True if pause has been requested and all child containers' consumer has actually paused. |
+| [`containers`](#containers) | n/a | A reference to all child `KafkaMessageListenerContainer` s. |

-#### [](#events)4.1.6. Application Events +#### 4.1.6. Application Events The following Spring application events are published by listener containers and their consumers: @@ -2898,7 +2898,7 @@ if (event.getReason.equals(Reason.FENCED)) { } ``` -##### [](#idle-containers)Detecting Idle and Non-Responsive Consumers +##### Detecting Idle and Non-Responsive Consumers While efficient, one problem with asynchronous consumers is detecting when they are idle. You might want to take some action if no messages arrive for some period of time.
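A minimal sketch of enabling that idle detection is shown below; the 30-second interval, the bean names, and the logging-only event handler are illustrative assumptions (the Event Consumption section that follows shows the framework's own example):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.event.ListenerContainerIdleEvent;
import org.springframework.stereotype.Component;

@Configuration
class IdleDetectionConfig {

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Publish a ListenerContainerIdleEvent if no records arrive for 30 seconds.
        factory.getContainerProperties().setIdleEventInterval(30_000L);
        return factory;
    }
}

@Component
class IdleEventHandler {

    @EventListener
    public void onIdle(ListenerContainerIdleEvent event) {
        // Take whatever action is appropriate; avoid stopping the container on this thread.
        System.out.println("Container idle: " + event.getListenerId());
    }
}
```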

@@ -2946,7 +2946,7 @@ Receiving such an event lets you stop the containers, thus waking the consumer s Starting with version 2.6.2, if a container has published a `ListenerContainerIdleEvent`, it will publish a `ListenerContainerNoLongerIdleEvent` when a record is subsequently received. -##### [](#event-consumption)Event Consumption +##### Event Consumption You can capture these events by implementing `ApplicationListener` — either a general listener or one narrowed to only receive this specific event. You can also use `@EventListener`, introduced in Spring Framework 4.2. @@ -2983,12 +2983,12 @@ public class Listener { } | |If you wish to use the idle event to stop the listener container, you should not call `container.stop()` on the thread that calls the listener.
Doing so causes delays and unnecessary log messages.
Instead, you should hand off the event to a different thread that can then stop the container.
Also, you should not `stop()` the container instance if it is a child container.
You should stop the concurrent container instead.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -###### [](#current-positions-when-idle)Current Positions when Idle +###### Current Positions when Idle Note that you can obtain the current positions when idle is detected by implementing `ConsumerSeekAware` in your listener. See `onIdleContainer()` in [Seeking to a Specific Offset](#seek). -#### [](#topicpartition-initial-offset)4.1.7. Topic/Partition Initial Offset +#### 4.1.7. Topic/Partition Initial Offset There are several ways to set the initial offset for a partition. @@ -3002,7 +3002,7 @@ When you use group management where the broker assigns partitions: * For an existing group ID, the initial offset is the current offset for that group ID. You can, however, seek to a specific offset during initialization (or at any time thereafter). -#### [](#seek)4.1.8. Seeking to a Specific Offset +#### 4.1.8. Seeking to a Specific Offset In order to seek, your listener must implement `ConsumerSeekAware`, which has the following methods: @@ -3227,7 +3227,7 @@ public class SomeOtherBean { } ``` -#### [](#container-factory)4.1.9. Container factory +#### 4.1.9. Container factory As discussed in [`@KafkaListener` Annotation](#kafka-listener-annotation), a `ConcurrentKafkaListenerContainerFactory` is used to create containers for annotated methods. @@ -3264,7 +3264,7 @@ public KafkaListenerContainerFactory kafkaListenerContainerFactory() { } ``` -#### [](#thread-safety)4.1.10. Thread Safety +#### 4.1.10. Thread Safety When using a concurrent message listener container, a single listener instance is invoked on all consumer threads. Listeners, therefore, need to be thread-safe, and it is preferable to use stateless listeners. @@ -3283,9 +3283,9 @@ Note that `SimpleThreadScope` does not destroy beans that have a destruction int | |By default, the application context’s event multicaster invokes event listeners on the calling thread.
If you change the multicaster to use an async executor, thread cleanup is not effective.| |---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#micrometer)4.1.11. Monitoring +#### 4.1.11. Monitoring -##### [](#monitoring-listener-performance)Monitoring Listener Performance +##### Monitoring Listener Performance Starting with version 2.3, the listener container will automatically create and update Micrometer `Timer` s for the listener, if `Micrometer` is detected on the class path, and a single `MeterRegistry` is present in the application context. The timers can be disabled by setting the `ContainerProperty` `micrometerEnabled` to `false`. @@ -3305,7 +3305,7 @@ You can add additional tags using the `ContainerProperties` `micrometerTags` pro | |With the concurrent container, timers are created for each thread and the `name` tag is suffixed with `-n` where n is `0` to `concurrency-1`.| |---|---------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#monitoring-kafkatemplate-performance)Monitoring KafkaTemplate Performance +##### Monitoring KafkaTemplate Performance Starting with version 2.5, the template will automatically create and update Micrometer `Timer` s for send operations, if `Micrometer` is detected on the class path, and a single `MeterRegistry` is present in the application context. The timers can be disabled by setting the template’s `micrometerEnabled` property to `false`. @@ -3322,7 +3322,7 @@ The timers are named `spring.kafka.template` and have the following tags: You can add additional tags using the template’s `micrometerTags` property. -##### [](#micrometer-native)Micrometer Native Metrics +##### Micrometer Native Metrics Starting with version 2.5, the framework provides [Factory Listeners](#factory-listeners) to manage a Micrometer `KafkaClientMetrics` instance whenever producers and consumers are created and closed. @@ -3369,11 +3369,11 @@ double count = this.meterRegistry.get("kafka.producer.node.incoming.byte.total") A similar listener is provided for the `StreamsBuilderFactoryBean` - see [KafkaStreams Micrometer Support](#streams-micrometer). -#### [](#transactions)4.1.12. Transactions +#### 4.1.12. Transactions This section describes how Spring for Apache Kafka supports transactions. -##### [](#overview-2)Overview +##### Overview The 0.11.0.0 client library added support for transactions. Spring for Apache Kafka adds support in the following ways: @@ -3405,7 +3405,7 @@ With Spring Boot, it is only necessary to set the `spring.kafka.producer.transac | |Starting with version 2.5.8, you can now configure the `maxAge` property on the producer factory.
This is useful when using transactional producers that might lie idle for longer than the broker’s `transactional.id.expiration.ms`.
With current `kafka-clients`, this can cause a `ProducerFencedException` without a rebalance.
By setting the `maxAge` to less than `transactional.id.expiration.ms`, the factory will refresh the producer if it is past its max age.|
|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|

-##### [](#using-kafkatransactionmanager)Using `KafkaTransactionManager`
+##### Using `KafkaTransactionManager`

The `KafkaTransactionManager` is an implementation of Spring Framework’s `PlatformTransactionManager`.
It is provided with a reference to the producer factory in its constructor.

@@ -3417,7 +3417,7 @@ If a transaction is active, any `KafkaTemplate` operations performed within the
The manager commits or rolls back the transaction, depending on success or failure.
You must configure the `KafkaTemplate` to use the same `ProducerFactory` as the transaction manager.

-##### [](#transaction-synchronization)Transaction Synchronization
+##### Transaction Synchronization

This section refers to producer-only transactions (transactions not started by a listener container); see [Using Consumer-Initiated Transactions](#container-transaction-manager) for information about chaining transactions when the container starts the transaction.

@@ -3440,14 +3440,14 @@ See [[ex-jdbc-sync]](#ex-jdbc-sync) for examples of an application that synchron
| |Starting with versions 2.5.17, 2.6.12, 2.7.9 and 2.8.0, if the commit fails on the synchronized transaction (after the primary transaction has committed), the exception will be thrown to the caller.
Previously, this was silently ignored (logged at debug).
Applications should take remedial action, if necessary, to compensate for the committed primary transaction.| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#container-transaction-manager)Using Consumer-Initiated Transactions +##### Using Consumer-Initiated Transactions The `ChainedKafkaTransactionManager` is now deprecated, since version 2.7; see the javadocs for its super class `ChainedTransactionManager` for more information. Instead, use a `KafkaTransactionManager` in the container to start the Kafka transaction and annotate the listener method with `@Transactional` to start the other transaction. See [[ex-jdbc-sync]](#ex-jdbc-sync) for an example application that chains JDBC and Kafka transactions. -##### [](#kafkatemplate-local-transactions)`KafkaTemplate` Local Transactions +##### `KafkaTemplate` Local Transactions You can use the `KafkaTemplate` to execute a series of operations within a local transaction. The following example shows how to do so: @@ -3467,7 +3467,7 @@ If an exception is thrown, the transaction is rolled back. | |If there is a `KafkaTransactionManager` (or synchronized) transaction in process, it is not used.
Instead, a new "nested" transaction is used.| |---|--------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#transaction-id-prefix)`transactionIdPrefix` +##### `transactionIdPrefix` As mentioned in [the overview](#transactions), the producer factory is configured with this property to build the producer `transactional.id` property. There is a dichotomy when specifying this property in that, when running multiple instances of the application with `EOSMode.ALPHA`, it must be the same on all instances to satisfy fencing zombies (also mentioned in the overview) when producing records on a listener container thread. @@ -3485,7 +3485,7 @@ This property must have a different value on each application instance. This problem (different rules for `transactional.id`) has been eliminated when `EOSMode.BETA` is being used (with broker versions \>= 2.5); see [Exactly Once Semantics](#exactly-once). -##### [](#tx-template-mixed)`KafkaTemplate` Transactional and non-Transactional Publishing +##### `KafkaTemplate` Transactional and non-Transactional Publishing Normally, when a `KafkaTemplate` is transactional (configured with a transaction-capable producer factory), transactions are required. The transaction can be started by a `TransactionTemplate`, a `@Transactional` method, calling `executeInTransaction`, or by a listener container, when configured with a `KafkaTransactionManager`. @@ -3494,7 +3494,7 @@ Starting with version 2.4.3, you can set the template’s `allowNonTransactional In that case, the template will allow the operation to run without a transaction, by calling the `ProducerFactory` 's `createNonTransactionalProducer()` method; the producer will be cached, or thread-bound, as normal for reuse. See [Using `DefaultKafkaProducerFactory`](#producer-factory). -##### [](#transactions-batch)Transactions with Batch Listeners +##### Transactions with Batch Listeners When a listener fails while transactions are being used, the `AfterRollbackProcessor` is invoked to take some action after the rollback occurs. When using the default `AfterRollbackProcessor` with a record listener, seeks are performed so that the failed record will be redelivered. @@ -3552,7 +3552,7 @@ public static class Config { } ``` -#### [](#exactly-once)4.1.13. Exactly Once Semantics +#### 4.1.13. Exactly Once Semantics You can provide a listener container with a `KafkaAwareTransactionManager` instance. When so configured, the container starts a transaction before invoking the listener. @@ -3602,7 +3602,7 @@ Refer to [KIP-447](https://cwiki.apache.org/confluence/display/KAFKA/KIP-447%3A+ `V1` and `V2` were previously `ALPHA` and `BETA`; they have been changed to align the framework with [KIP-732](https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2). -#### [](#interceptors)4.1.14. Wiring Spring Beans into Producer/Consumer Interceptors +#### 4.1.14. Wiring Spring Beans into Producer/Consumer Interceptors Apache Kafka provides a mechanism to add interceptors to producers and consumers. These objects are managed by Kafka, not Spring, and so normal Spring dependency injection won’t work for wiring in dependent Spring Beans. @@ -3737,7 +3737,7 @@ consumer interceptor in my foo bean Received test ``` -#### [](#pause-resume)4.1.15. Pausing and Resuming Listener Containers +#### 4.1.15. 
Pausing and Resuming Listener Containers Version 2.1.3 added `pause()` and `resume()` methods to listener containers. Previously, you could pause a consumer within a `ConsumerAwareMessageListener` and resume it by listening for a `ListenerContainerIdleEvent`, which provides access to the `Consumer` object. @@ -3812,7 +3812,7 @@ ConsumerResumedEvent [partitions=[pause.resume.topic-1, pause.resume.topic-0]] thing2 ``` -#### [](#pause-resume-partitions)4.1.16. Pausing and Resuming Partitions on Listener Containers +#### 4.1.16. Pausing and Resuming Partitions on Listener Containers Since version 2.7 you can pause and resume the consumption of specific partitions assigned to that consumer by using the `pausePartition(TopicPartition topicPartition)` and `resumePartition(TopicPartition topicPartition)` methods in the listener containers. The pausing and resuming takes place respectively before and after the `poll()` similar to the `pause()` and `resume()` methods. @@ -3821,9 +3821,9 @@ The `isPartitionPaused()` method returns true if that partition has effectively Also since version 2.7 `ConsumerPartitionPausedEvent` and `ConsumerPartitionResumedEvent` instances are published with the container as the `source` property and the `TopicPartition` instance. -#### [](#serdes)4.1.17. Serialization, Deserialization, and Message Conversion +#### 4.1.17. Serialization, Deserialization, and Message Conversion -##### [](#overview-3)Overview +##### Overview Apache Kafka provides a high-level API for serializing and deserializing record values as well as their keys. It is present with the `org.apache.kafka.common.serialization.Serializer` and`org.apache.kafka.common.serialization.Deserializer` abstractions with some built-in implementations. @@ -3844,7 +3844,7 @@ constructors to accept `Serializer` and `Deserializer` instances for `keys` and When you use this API, the `DefaultKafkaProducerFactory` and `DefaultKafkaConsumerFactory` also provide properties (through constructors or setter methods) to inject custom `Serializer` and `Deserializer` instances into the target `Producer` or `Consumer`. Also, you can pass in `Supplier` or `Supplier` instances through constructors - these `Supplier` s are called on creation of each `Producer` or `Consumer`. -##### [](#string-serde)String serialization +##### String serialization Since version 2.5, Spring for Apache Kafka provides `ToStringSerializer` and `ParseStringDeserializer` classes that use String representation of entities. They rely on methods `toString` and some `Function` or `BiFunction` to parse the String and populate properties of an instance. @@ -3889,7 +3889,7 @@ The method must be static and have a signature of either `(String, Headers)` or A `ToFromStringSerde` is also provided, for use with Kafka Streams. -##### [](#json-serde)JSON +##### JSON Spring for Apache Kafka also provides `JsonSerializer` and `JsonDeserializer` implementations that are based on the Jackson JSON object mapper. @@ -3916,7 +3916,7 @@ Starting with version 2.1, you can convey type information in record `Headers`, In addition, you can configure the serializer and deserializer by using the following Kafka properties. They have no effect if you have provided `Serializer` and `Deserializer` instances for `KafkaConsumer` and `KafkaProducer`, respectively. 
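Purely as a hedged sketch of what that property-based configuration can look like (the constants are the ones listed under Configuration Properties below; `com.example.MyValue` and the `com.example` package are placeholders):

```java
Map<String, Object> consumerProps = new HashMap<>();
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
// Placeholder type and trusted package; adjust to your own domain classes
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "com.example.MyValue");
consumerProps.put(JsonDeserializer.TRUSTED_PACKAGES, "com.example");

Map<String, Object> producerProps = new HashMap<>();
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
producerProps.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, true);
```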
-###### [](#serdes-json-config)Configuration Properties +###### Configuration Properties * `JsonSerializer.ADD_TYPE_INFO_HEADERS` (default `true`): You can set it to `false` to disable this feature on the `JsonSerializer` (sets the `addTypeInfo` property). @@ -3946,7 +3946,7 @@ See also [[tip-json]](#tip-json). | |Starting with version 2.8, if you construct the serializer or deserializer programmatically as shown in [Programmatic Construction](#prog-json), the above properties will be applied by the factories, as long as you have not set any properties explicitly (using `set*()` methods or using the fluent API).
Previously, when creating programmatically, the configuration properties were never applied; this is still the case if you explicitly set properties on the object directly.| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -###### [](#serdes-mapping-types)Mapping Types +###### Mapping Types Starting with version 2.2, when using JSON, you can now provide type mappings by using the properties in the preceding list. Previously, you had to customize the type mapper within the serializer and deserializer. @@ -3986,7 +3986,7 @@ DefaultKafkaConsumerFactory cf = new DefaultKafkaConsumerFactory< new IntegerDeserializer(), new JsonDeserializer<>(Cat1.class, false)); ``` -###### [](#serdes-type-methods)Using Methods to Determine Types +###### Using Methods to Determine Types Starting with version 2.5, you can now configure the deserializer, via properties, to invoke a method to determine the target type. If present, this will override any of the other techniques discussed above. @@ -4034,7 +4034,7 @@ public static JavaType thing1Thing2JavaTypeForTopic(String topic, byte[] data, H } ``` -###### [](#prog-json)Programmatic Construction +###### Programmatic Construction When constructing the serializer/deserializer programmatically for use in the producer/consumer factory, since version 2.3, you can use the fluent API, which simplifies configuration. @@ -4080,9 +4080,9 @@ JsonDeserializer deser = new JsonDeserializer<>() Alternatively, as long as you don’t use the fluent API to configure properties, or set them using `set*()` methods, the factories will configure the serializer/deserializer using the configuration properties; see [Configuration Properties](#serdes-json-config). -##### [](#delegating-serialization)Delegating Serializer and Deserializer +##### Delegating Serializer and Deserializer -###### [](#using-headers)Using Headers +###### Using Headers Version 2.3 introduced the `DelegatingSerializer` and `DelegatingDeserializer`, which allow producing and consuming records with different key and/or value types. Producers must set a header `DelegatingSerializer.VALUE_SERIALIZATION_SELECTOR` to a selector value that is used to select which serializer to use for the value and `DelegatingSerializer.KEY_SERIALIZATION_SELECTOR` for the key; if a match is not found, an `IllegalStateException` is thrown. @@ -4115,7 +4115,7 @@ This technique supports sending different types to the same topic (or different For another technique to send different types to different topics, see [Using `RoutingKafkaTemplate`](#routing-template). -###### [](#by-type)By Type +###### By Type Version 2.8 introduced the `DelegatingByTypeSerializer`. @@ -4133,7 +4133,7 @@ public ProducerFactory producerFactory(Map conf Starting with version 2.8.3, you can configure the serializer to check if the map key is assignable from the target object, useful when a delegate serializer can serialize sub classes. In this case, if there are amiguous matches, an ordered `Map`, such as a `LinkedHashMap` should be provided. 
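The following is only a sketch of the ordered-map arrangement described above; `MyEvent` and `MySpecialEvent` are hypothetical types, and it assumes the assignability check added in 2.8.3 is enabled through a boolean constructor argument:

```java
// LinkedHashMap: the more specific type is registered before its supertype,
// so an ambiguous (assignable) match resolves to the first entry
Map<Class<?>, Serializer<?>> delegates = new LinkedHashMap<>();
delegates.put(MySpecialEvent.class, new JsonSerializer<MySpecialEvent>());
delegates.put(MyEvent.class, new JsonSerializer<MyEvent>());
delegates.put(byte[].class, new ByteArraySerializer());

DelegatingByTypeSerializer valueSerializer = new DelegatingByTypeSerializer(delegates, true);
```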
-###### [](#by-topic)By Topic +###### By Topic Starting with version 2.8, the `DelegatingByTopicSerializer` and `DelegatingByTopicDeserializer` allow selection of a serializer/deserializer based on the topic name. Regex `Pattern` s are used to lookup the instance to use. @@ -4167,7 +4167,7 @@ You can specify a default serializer/deserializer to use when there is no patter An additional property `DelegatingByTopicSerialization.CASE_SENSITIVE` (default `true`), when set to `false` makes the topic lookup case insensitive. -##### [](#retrying-deserialization)Retrying Deserializer +##### Retrying Deserializer The `RetryingDeserializer` uses a delegate `Deserializer` and `RetryTemplate` to retry deserialization when the delegate might have transient errors, such a network issues, during deserialization. @@ -4179,7 +4179,7 @@ ConsumerFactory cf = new DefaultKafkaConsumerFactory(myConsumerConfigs, Refer to the [spring-retry](https://github.com/spring-projects/spring-retry) project for configuration of the `RetryTemplate` with a retry policy, back off policy, etc. -##### [](#messaging-message-conversion)Spring Messaging Message Conversion +##### Spring Messaging Message Conversion Although the `Serializer` and `Deserializer` API is quite simple and flexible from the low-level Kafka `Consumer` and `Producer` perspective, you might need more flexibility at the Spring Messaging level, when using either `@KafkaListener` or [Spring Integration’s Apache Kafka Support](https://docs.spring.io/spring-integration/docs/current/reference/html/kafka.html#kafka). To let you easily convert to and from `org.springframework.messaging.Message`, Spring for Apache Kafka provides a `MessageConverter` abstraction with the `MessagingMessageConverter` implementation and its `JsonMessageConverter` (and subclasses) customization. @@ -4234,7 +4234,7 @@ public void smart(Thing thing) { } ``` -###### [](#data-projection)Using Spring Data Projection Interfaces +###### Using Spring Data Projection Interfaces Starting with version 2.1.1, you can convert JSON to a Spring Data Projection interface instead of a concrete type. This allows very selective, and low-coupled bindings to data, including the lookup of values from multiple places inside the JSON document. @@ -4265,7 +4265,7 @@ You must also add `spring-data:spring-data-commons` and `com.jayway.jsonpath:jso When used as the parameter to a `@KafkaListener` method, the interface type is automatically passed to the converter as normal. -##### [](#error-handling-deserializer)Using `ErrorHandlingDeserializer` +##### Using `ErrorHandlingDeserializer` When a deserializer fails to deserialize a message, Spring has no way to handle the problem, because it occurs before the `poll()` returns. To solve this problem, the `ErrorHandlingDeserializer` has been introduced. @@ -4396,7 +4396,7 @@ void listen(List> in) { } ``` -##### [](#payload-conversion-with-batch)Payload Conversion with Batch Listeners +##### Payload Conversion with Batch Listeners You can also use a `JsonMessageConverter` within a `BatchMessagingMessageConverter` to convert batch messages when you use a batch listener container factory. See [Serialization, Deserialization, and Message Conversion](#serdes) and [Spring Messaging Message Conversion](#messaging-message-conversion) for more information. 
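As a condensed, non-authoritative sketch (assuming JSON record values and an existing `ConsumerFactory` bean), the converter can be plugged into a batch listener container factory along these lines:

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> batchFactory(
        ConsumerFactory<String, String> consumerFactory) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true);
    // Each record in the batch is converted by the delegate JsonMessageConverter
    factory.setMessageConverter(new BatchMessagingMessageConverter(new JsonMessageConverter()));
    return factory;
}
```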
@@ -4446,7 +4446,7 @@ public void listen1(List> fooMessages) { } ``` -##### [](#conversionservice-customization)`ConversionService` Customization +##### `ConversionService` Customization Starting with version 2.1.1, the `org.springframework.core.convert.ConversionService` used by the default `o.s.messaging.handler.annotation.support.MessageHandlerMethodFactory` to resolve parameters for the invocation of a listener method is supplied with all beans that implement any of the following interfaces: @@ -4461,7 +4461,7 @@ This lets you further customize listener deserialization without changing the de | |Setting a custom `MessageHandlerMethodFactory` on the `KafkaListenerEndpointRegistrar` through a `KafkaListenerConfigurer` bean disables this feature.| |---|------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#custom-arg-resolve)Adding custom `HandlerMethodArgumentResolver` to `@KafkaListener` +##### Adding custom `HandlerMethodArgumentResolver` to `@KafkaListener` Starting with version 2.4.2 you are able to add your own `HandlerMethodArgumentResolver` and resolve custom method parameters. All you need is to implement `KafkaListenerConfigurer` and use method `setCustomMethodArgumentResolvers()` from class `KafkaListenerEndpointRegistrar`. @@ -4499,7 +4499,7 @@ If you are using a `DefaultMessageHandlerMethodFactory`, set this resolver as th See also [Null Payloads and Log Compaction of 'Tombstone' Records](#tombstones). -#### [](#headers)4.1.18. Message Headers +#### 4.1.18. Message Headers The 0.11.0.0 client introduced support for headers in messages. As of version 2.0, Spring for Apache Kafka now supports mapping these headers to and from `spring-messaging` `MessageHeaders`. @@ -4643,7 +4643,7 @@ MessagingMessageConverter converter() { If using Spring Boot, it will auto configure this converter bean into the auto-configured `KafkaTemplate`; otherwise you should add this converter to the template. -#### [](#tombstones)4.1.19. Null Payloads and Log Compaction of 'Tombstone' Records +#### 4.1.19. Null Payloads and Log Compaction of 'Tombstone' Records When you use [Log Compaction](https://kafka.apache.org/documentation/#compaction), you can send and receive messages with `null` payloads to identify the deletion of a key. @@ -4701,11 +4701,11 @@ Note that the argument is `null`, not `KafkaNull`. | |This feature requires the use of a `KafkaNullAwarePayloadArgumentResolver` which the framework will configure when using the default `MessageHandlerMethodFactory`.
When using a custom `MessageHandlerMethodFactory`, see [Adding custom `HandlerMethodArgumentResolver` to `@KafkaListener`](#custom-arg-resolve).|
|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|

-#### [](#annotation-error-handling)4.1.20. Handling Exceptions
+#### 4.1.20. Handling Exceptions

This section describes how to handle various exceptions that may arise when you use Spring for Apache Kafka.

-##### [](#listener-error-handlers)Listener Error Handlers
+##### Listener Error Handlers

Starting with version 2.0, the `@KafkaListener` annotation has a new attribute: `errorHandler`.

@@ -4794,7 +4794,7 @@ This resets each topic/partition in the batch to the lowest offset in the batch.
| |The preceding two examples are simplistic implementations, and you would probably want more checking in the error handler.|
|---|--------------------------------------------------------------------------------------------------------------------------|

-##### [](#error-handlers)Container Error Handlers
+##### Container Error Handlers

Starting with version 2.8, the legacy `ErrorHandler` and `BatchErrorHandler` interfaces have been superseded by a new `CommonErrorHandler`.
These error handlers can handle errors for both record and batch listeners, allowing a single listener container factory to create containers for both types of listener.
`CommonErrorHandler` implementations to replace most legacy framework error handler implementations are provided, and the legacy error handlers are deprecated.

@@ -4842,7 +4842,7 @@ The container commits any pending offset commits before calling the error handle
If you are using Spring Boot, you simply need to add the error handler as a `@Bean` and Boot will add it to the auto-configured factory.

-##### [](#default-eh)DefaultErrorHandler
+##### DefaultErrorHandler

This new error handler replaces the `SeekToCurrentErrorHandler` and `RecoveringBatchErrorHandler`, which have been the default error handlers for several releases now.
One difference is that the fallback behavior for batch listeners (when an exception other than a `BatchListenerFailedException` is thrown) is the equivalent of the [Retrying Complete Batches](#retrying-batch-eh).

@@ -4990,7 +4990,7 @@ By default, the exception type is not considered.
Also see [Delivery Attempts Header](#delivery-header).

-#### [](#batch-listener-conv-errors)4.1.21. Conversion Errors with Batch Error Handlers
+#### 4.1.21. Conversion Errors with Batch Error Handlers

Starting with version 2.8, batch listeners can now properly handle conversion errors, when using a `MessageConverter` with a `ByteArrayDeserializer`, a `BytesDeserializer` or a `StringDeserializer`, as well as a `DefaultErrorHandler`.
When a conversion error occurs, the payload is set to null and a deserialization exception is added to the record headers, similar to the `ErrorHandlingDeserializer`.

@@ -5011,7 +5011,7 @@ void listen(List in, @Header(KafkaHeaders.CONVERSION_FAILURES) List
| |In other words, all streams defined by a `StreamsBuilder` are tied with a single lifecycle control.
Once a `KafkaStreams` instance has been closed by `streams.close()`, it cannot be restarted.
Instead, a new `KafkaStreams` instance to restart stream processing must be created.| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#streams-spring)4.2.2. Spring Management +#### 4.2.2. Spring Management To simplify using Kafka Streams from the Spring application context perspective and use the lifecycle management through a container, the Spring for Apache Kafka introduces `StreamsBuilderFactoryBean`. This is an `AbstractFactoryBean` implementation to expose a `StreamsBuilder` singleton instance as a bean. @@ -5521,7 +5521,7 @@ Default no-op implementations are provided to avoid having to implement both met A `CompositeKafkaStreamsInfrastructureCustomizer` is provided, for when you need to apply multiple customizers. -#### [](#streams-micrometer)4.2.3. KafkaStreams Micrometer Support +#### 4.2.3. KafkaStreams Micrometer Support Introduced in version 2.5.3, you can configure a `KafkaStreamsMicrometerListener` to automatically register micrometer meters for the `KafkaStreams` object managed by the factory bean: @@ -5530,7 +5530,7 @@ streamsBuilderFactoryBean.addListener(new KafkaStreamsMicrometerListener(meterRe Collections.singletonList(new ImmutableTag("customTag", "customTagValue")))); ``` -#### [](#serde)4.2.4. Streams JSON Serialization and Deserialization +#### 4.2.4. Streams JSON Serialization and Deserialization For serializing and deserializing data when reading or writing to topics or state stores in JSON format, Spring for Apache Kafka provides a `JsonSerde` implementation that uses JSON, delegating to the `JsonSerializer` and `JsonDeserializer` described in [Serialization, Deserialization, and Message Conversion](#serdes). The `JsonSerde` implementation provides the same configuration options through its constructor (target type or `ObjectMapper`). @@ -5551,7 +5551,7 @@ stream.through(new JsonSerde<>(MyKeyType.class) "myTypes"); ``` -#### [](#using-kafkastreambrancher)4.2.5. Using `KafkaStreamBrancher` +#### 4.2.5. Using `KafkaStreamBrancher` The `KafkaStreamBrancher` class introduces a more convenient way to build conditional branches on top of `KStream`. @@ -5580,7 +5580,7 @@ new KafkaStreamBrancher() //onTopOf method returns the provided stream so we can continue with method chaining ``` -#### [](#streams-config)4.2.6. Configuration +#### 4.2.6. Configuration To configure the Kafka Streams environment, the `StreamsBuilderFactoryBean` requires a `KafkaStreamsConfiguration` instance. See the Apache Kafka [documentation](https://kafka.apache.org/0102/documentation/#streamsconfigs) for all possible options. @@ -5599,7 +5599,7 @@ By default, when the factory bean is stopped, the `KafkaStreams.cleanUp()` metho Starting with version 2.1.2, the factory bean has additional constructors, taking a `CleanupConfig` object that has properties to let you control whether the `cleanUp()` method is called during `start()` or `stop()` or neither. Starting with version 2.7, the default is to never clean up local state. -#### [](#streams-header-enricher)4.2.7. Header Enricher +#### 4.2.7. Header Enricher Version 2.3 added the `HeaderEnricher` implementation of `Transformer`. 
This can be used to add headers within the stream processing; the header values are SpEL expressions; the root object of the expression evaluation has 3 properties: @@ -5641,7 +5641,7 @@ stream .to(OUTPUT); ``` -#### [](#streams-messaging)4.2.8. `MessagingTransformer` +#### 4.2.8. `MessagingTransformer` Version 2.3 added the `MessagingTransformer` this allows a Kafka Streams topology to interact with a Spring Messaging component, such as a Spring Integration flow. The transformer requires an implementation of `MessagingFunction`. @@ -5659,7 +5659,7 @@ Spring Integration automatically provides an implementation using its `GatewayPr It also requires a `MessagingMessageConverter` to convert the key, value and metadata (including headers) to/from a Spring Messaging `Message`. See [[Calling a Spring Integration Flow from a `KStream`](https://docs.spring.io/spring-integration/docs/current/reference/html/kafka.html#streams-integration)] for more information. -#### [](#streams-deser-recovery)4.2.9. Recovery from Deserialization Exceptions +#### 4.2.9. Recovery from Deserialization Exceptions Version 2.3 introduced the `RecoveringDeserializationExceptionHandler` which can take some action when a deserialization exception occurs. Refer to the Kafka documentation about `DeserializationExceptionHandler`, of which the `RecoveringDeserializationExceptionHandler` is an implementation. @@ -5690,7 +5690,7 @@ public DeadLetterPublishingRecoverer recoverer() { Of course, the `recoverer()` bean can be your own implementation of `ConsumerRecordRecoverer`. -#### [](#kafka-streams-example)4.2.10. Kafka Streams Example +#### 4.2.10. Kafka Streams Example The following example combines all the topics we have covered in this chapter: @@ -5740,16 +5740,16 @@ public static class KafkaStreamsConfig { } ``` -### [](#testing)4.3. Testing Applications +### 4.3. Testing Applications The `spring-kafka-test` jar contains some useful utilities to assist with testing your applications. -#### [](#ktu)4.3.1. KafkaTestUtils +#### 4.3.1. KafkaTestUtils `o.s.kafka.test.utils.KafkaTestUtils` provides a number of static helper methods to consume records, retrieve various record offsets, and others. Refer to its [Javadocs](https://docs.spring.io/spring-kafka/docs/current/api/org/springframework/kafka/test/utils/KafkaTestUtils.html) for complete details. -#### [](#junit)4.3.2. JUnit +#### 4.3.2. JUnit `o.s.kafka.test.utils.KafkaTestUtils` also provides some static methods to set up producer and consumer properties. The following listing shows those method signatures: @@ -5847,7 +5847,7 @@ Convenient constants (`EmbeddedKafkaBroker.SPRING_EMBEDDED_KAFKA_BROKERS` and `E With the `EmbeddedKafkaBroker.brokerProperties(Map)`, you can provide additional properties for the Kafka servers. See [Kafka Config](https://kafka.apache.org/documentation/#brokerconfigs) for more information about possible broker properties. -#### [](#configuring-topics-2)4.3.3. Configuring Topics +#### 4.3.3. Configuring Topics The following example configuration creates topics called `cat` and `hat` with five partitions, a topic called `thing1` with 10 partitions, and a topic called `thing2` with 15 partitions: @@ -5870,7 +5870,7 @@ public class MyTests { By default, `addTopics` will throw an exception when problems arise (such as adding a topic that already exists). Version 2.6 added a new version of that method that returns a `Map`; the key is the topic name and the value is `null` for success, or an `Exception` for a failure. 
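As a small sketch (the topic name is arbitrary and the broker is assumed to be available as a bean in the test context), an additional topic can be added to a running embedded broker; by default `addTopics` fails if the topic already exists, as noted above:

```java
@Autowired
private EmbeddedKafkaBroker embeddedKafka;

public void addAnotherTopic() {
    // Create one more topic after the broker has started
    this.embeddedKafka.addTopics(
            TopicBuilder.name("extra-topic").partitions(10).replicas(1).build());
}
```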
-#### [](#using-the-same-brokers-for-multiple-test-classes)4.3.4. Using the Same Broker(s) for Multiple Test Classes
+#### 4.3.4. Using the Same Broker(s) for Multiple Test Classes

There is no built-in support for doing so, but you can use the same broker for multiple test classes with something similar to the following:

@@ -5919,7 +5919,7 @@ If you are not using Spring Boot, you can obtain the bootstrap servers using `br
| |The preceding example provides no mechanism for shutting down the broker(s) when all tests are complete.
This could be a problem if, say, you run your tests in a Gradle daemon.
You should not use this technique in such a situation, or you should use something to call `destroy()` on the `EmbeddedKafkaBroker` when your tests are complete.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#embedded-kafka-annotation)4.3.5. @EmbeddedKafka Annotation +#### 4.3.5. @EmbeddedKafka Annotation We generally recommend that you use the rule as a `@ClassRule` to avoid starting and stopping the broker between tests (and use a different topic for each test). Starting with version 2.0, if you use Spring’s test application context caching, you can also declare a `EmbeddedKafkaBroker` bean, so a single broker can be used across multiple test classes. @@ -5989,7 +5989,7 @@ Properties defined by `brokerProperties` override properties found in `brokerPro You can use the `@EmbeddedKafka` annotation with JUnit 4 or JUnit 5. -#### [](#embedded-kafka-junit5)4.3.6. @EmbeddedKafka Annotation with JUnit5 +#### 4.3.6. @EmbeddedKafka Annotation with JUnit5 Starting with version 2.3, there are two ways to use the `@EmbeddedKafka` annotation with JUnit5. When used with the `@SpringJunitConfig` annotation, the embedded broker is added to the test application context. @@ -6015,7 +6015,7 @@ A stand-alone (not Spring test context) broker will be created if the class anno | |When there is a Spring test application context available, the topics and broker properties can contain property placeholders, which will be resolved as long as the property is defined somewhere.
If there is no Spring context available, these placeholders won’t be resolved.| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#embedded-broker-in-springboottest-annotations)4.3.7. Embedded Broker in `@SpringBootTest` Annotations +#### 4.3.7. Embedded Broker in `@SpringBootTest` Annotations [Spring Initializr](https://start.spring.io/) now automatically adds the `spring-kafka-test` dependency in test scope to the project configuration. @@ -6030,7 +6030,7 @@ They include: * [`@EmbeddedKafka` Annotation or `EmbeddedKafkaBroker` Bean](#kafka-testing-embeddedkafka-annotation) -##### [](#kafka-testing-junit4-class-rule)JUnit4 Class Rule +##### JUnit4 Class Rule The following example shows how to use a JUnit4 class rule to create an embedded broker: @@ -6058,7 +6058,7 @@ public class MyApplicationTests { Notice that, since this is a Spring Boot application, we override the broker list property to set Boot’s property. -##### [](#kafka-testing-embeddedkafka-annotation)`@EmbeddedKafka` Annotation or `EmbeddedKafkaBroker` Bean +##### `@EmbeddedKafka` Annotation or `EmbeddedKafkaBroker` Bean The following example shows how to use an `@EmbeddedKafka` Annotation to create an embedded broker: @@ -6079,7 +6079,7 @@ public class MyApplicationTests { } ``` -#### [](#hamcrest-matchers)4.3.8. Hamcrest Matchers +#### 4.3.8. Hamcrest Matchers The `o.s.kafka.test.hamcrest.KafkaMatchers` provides the following matchers: @@ -6126,7 +6126,7 @@ public static Matcher> hasTimestamp(TimestampType type, lon } ``` -#### [](#assertj-conditions)4.3.9. AssertJ Conditions +#### 4.3.9. AssertJ Conditions You can use the following AssertJ conditions: @@ -6179,7 +6179,7 @@ public static Condition> timestamp(TimestampType type, long } ``` -#### [](#example)4.3.10. Example +#### 4.3.10. Example The following example brings together most of the topics covered in this chapter: @@ -6254,7 +6254,7 @@ received = records.poll(10, TimeUnit.SECONDS); assertThat(received).has(allOf(keyValue(2, "baz"), partition(0))); ``` -### [](#retry-topic)4.4. Non-Blocking Retries +### 4.4. Non-Blocking Retries | |This is an experimental feature and the usual rule of no breaking API changes does not apply to this feature until the experimental designation is removed.
Users are encouraged to try out the feature and provide feedback via GitHub Issues or GitHub discussions.
This is regarding the API only; the feature is considered to be complete, and robust.| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| @@ -6262,7 +6262,7 @@ assertThat(received).has(allOf(keyValue(2, "baz"), partition(0))); Achieving non-blocking retry / dlt functionality with Kafka usually requires setting up extra topics and creating and configuring the corresponding listeners. Since 2.7 Spring for Apache Kafka offers support for that via the `@RetryableTopic` annotation and `RetryTopicConfiguration` class to simplify that bootstrapping. -#### [](#how-the-pattern-works)4.4.1. How The Pattern Works +#### 4.4.1. How The Pattern Works If message processing fails, the message is forwarded to a retry topic with a back off timestamp. The retry topic consumer then checks the timestamp and if it’s not due it pauses the consumption for that topic’s partition. @@ -6281,9 +6281,9 @@ The framework also takes care of creating the topics and setting up and configur | |At this time this functionality doesn’t support class level `@KafkaListener` annotations| |---|----------------------------------------------------------------------------------------| -#### [](#back-off-delay-precision)4.4.2. Back Off Delay Precision +#### 4.4.2. Back Off Delay Precision -##### [](#overview-and-guarantees)Overview and Guarantees +##### Overview and Guarantees All message processing and backing off is handled by the consumer thread, and, as such, delay precision is guaranteed on a best-effort basis. If one message’s processing takes longer than the next message’s back off period for that consumer, the next message’s delay will be higher than expected. @@ -6295,7 +6295,7 @@ That being said, for consumers handling a single partition the message’s proce | |It is guaranteed that a message will never be processed before its due time.| |---|----------------------------------------------------------------------------| -##### [](#tuning-the-delay-precision)Tuning the Delay Precision +##### Tuning the Delay Precision The message’s processing delay precision relies on two `ContainerProperties`: `ContainerProperties.pollTimeout` and `ContainerProperties.idlePartitionEventInterval`. Both properties will be automatically set in the retry topic and dlt’s `ListenerContainerFactory` to one quarter of the smallest delay value for that topic, with a minimum value of 250ms and a maximum value of 5000ms. @@ -6305,9 +6305,9 @@ This way you can tune the precision and performance for the retry topics if you | |You can have separate `ListenerContainerFactory` instances for the main and retry topics - this way you can have different settings to better suit your needs, such as having a higher polling timeout setting for the main topics and a lower one for the retry topics.| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#configuration)4.4.3. Configuration +#### 4.4.3. 
Configuration -##### [](#using-the-retryabletopic-annotation)Using the `@RetryableTopic` annotation +##### Using the `@RetryableTopic` annotation To configure the retry topic and dlt for a `@KafkaListener` annotated method, you just have to add the `@RetryableTopic` annotation to it and Spring for Apache Kafka will bootstrap all the necessary topics and consumers with the default configurations. @@ -6332,7 +6332,7 @@ public void processMessage(MyPojo message) { | |If you don’t specify a kafkaTemplate name a bean with name `retryTopicDefaultKafkaTemplate` will be looked up.
If no bean is found an exception is thrown.| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#using-retrytopicconfiguration-beans)Using `RetryTopicConfiguration` beans +##### Using `RetryTopicConfiguration` beans You can also configure the non-blocking retry support by creating `RetryTopicConfiguration` beans in a `@Configuration` annotated class. @@ -6392,11 +6392,11 @@ public KafkaTemplate kafkaTemplate() { } ``` -#### [](#features)4.4.4. Features +#### 4.4.4. Features Most of the features are available both for the `@RetryableTopic` annotation and the `RetryTopicConfiguration` beans. -##### [](#backoff-configuration)BackOff Configuration +##### BackOff Configuration The BackOff configuration relies on the `BackOffPolicy` interface from the `Spring Retry` project. @@ -6453,7 +6453,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templa | |The first attempt counts against the maxAttempts, so if you provide a maxAttempts value of 4 there’ll be the original attempt plus 3 retries.| |---|---------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#single-topic-fixed-delay-retries)Single Topic Fixed Delay Retries +##### Single Topic Fixed Delay Retries If you’re using fixed delay policies such as `FixedBackOffPolicy` or `NoBackOffPolicy` you can use a single topic to accomplish the non-blocking retries. This topic will be suffixed with the provided or default suffix, and will not have either the index or the delay values appended. @@ -6481,7 +6481,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templa | |The default behavior is creating separate retry topics for each attempt, appended with their index value: retry-0, retry-1, …​| |---|------------------------------------------------------------------------------------------------------------------------------| -##### [](#global-timeout)Global timeout +##### Global timeout You can set the global timeout for the retrying process. If that time is reached, the next time the consumer throws an exception the message goes straight to the DLT, or just ends the processing if no DLT is available. @@ -6508,7 +6508,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templa | |The default is having no timeout set, which can also be achieved by providing -1 as the timout value.| |---|-----------------------------------------------------------------------------------------------------| -##### [](#retry-topic-ex-classifier)Exception Classifier +##### Exception Classifier You can specify which exceptions you want to retry on and which not to. You can also set it to traverse the causes to lookup nested exceptions. 
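For illustration, a sketch of such a classification on a `RetryTopicConfiguration` bean (the exception type and `MyPojo` are hypothetical):

```java
@Bean
public RetryTopicConfiguration retryTopicOnSelectedExceptions(KafkaTemplate<String, MyPojo> template) {
    return RetryTopicConfigurationBuilder
            .newInstance()
            .notRetryOn(MyFatalException.class) // go straight to the DLT for this exception
            .traversingCauses()                 // also examine nested causes
            .create(template);
}
```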
@@ -6553,7 +6553,7 @@ public DefaultDestinationTopicResolver topicResolver(ApplicationContext applicat | |To disable fatal exceptions' classification, clear the default list using the `setClassifications` method in `DefaultDestinationTopicResolver`.| |---|-----------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#include-and-exclude-topics)Include and Exclude Topics +##### Include and Exclude Topics You can decide which topics will and will not be handled by a `RetryTopicConfiguration` bean via the .includeTopic(String topic), .includeTopics(Collection\ topics) .excludeTopic(String topic) and .excludeTopics(Collection\ topics) methods. @@ -6578,7 +6578,7 @@ public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate | |The default behavior is to include all topics.| |---|----------------------------------------------| -##### [](#topics-autocreation)Topics AutoCreation +##### Topics AutoCreation Unless otherwise specified the framework will auto create the required topics using `NewTopic` beans that are consumed by the `KafkaAdmin` bean. You can specify the number of partitions and the replication factor with which the topics will be created, and you can turn this feature off. @@ -6621,7 +6621,7 @@ public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate | |By default the topics are autocreated with one partition and a replication factor of one.| |---|-----------------------------------------------------------------------------------------| -##### [](#retry-headers)Failure Header Management +##### Failure Header Management When considering how to manage failure headers (original headers and exception headers), the framework delegates to the `DeadLetterPublishingRecover` to decide whether to append or replace the headers. @@ -6646,7 +6646,7 @@ DeadLetterPublishingRecovererFactory factory(DestinationTopicResolver resolver) } ``` -#### [](#topic-naming)4.4.5. Topic Naming +#### 4.4.5. Topic Naming Retry topics and DLT are named by suffixing the main topic with a provided or default value, appended by either the delay or index for that topic. @@ -6656,7 +6656,7 @@ Examples: "my-other-topic" → "my-topic-myRetrySuffix-1000", "my-topic-myRetrySuffix-2000", …​, "my-topic-myDltSuffix". -##### [](#retry-topics-and-dlt-suffixes)Retry Topics and Dlt Suffixes +##### Retry Topics and Dlt Suffixes You can specify the suffixes that will be used by the retry and dlt topics. @@ -6682,7 +6682,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate t | |The default suffixes are "-retry" and "-dlt", for retry topics and dlt respectively.| |---|------------------------------------------------------------------------------------| -##### [](#appending-the-topics-index-or-delay)Appending the Topic’s Index or Delay +##### Appending the Topic’s Index or Delay You can either append the topic’s index or delay values after the suffix. 
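A sketch of opting into index suffixes on a `RetryTopicConfiguration` bean; the builder method name used here (`suffixTopicsWithIndexValues()`) is an assumption and may differ in your version:

```java
@Bean
public RetryTopicConfiguration retryTopicWithIndexSuffixes(KafkaTemplate<String, MyPojo> template) {
    return RetryTopicConfigurationBuilder
            .newInstance()
            .suffixTopicsWithIndexValues() // assumed builder method: my-topic-retry-0, my-topic-retry-1, ...
            .create(template);
}
```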
@@ -6707,7 +6707,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templa | |The default behavior is to suffix with the delay values, except for fixed delay configurations with multiple topics, in which case the topics are suffixed with the topic’s index.| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#custom-naming-strategies)Custom naming strategies +##### Custom naming strategies More complex naming strategies can be accomplished by registering a bean that implements `RetryTopicNamesProviderFactory`. The default implementation is `SuffixingRetryTopicNamesProviderFactory` and a different implementation can be registered in the following way: @@ -6745,11 +6745,11 @@ public class CustomRetryTopicNamesProviderFactory implements RetryTopicNamesProv } ``` -#### [](#dlt-strategies)4.4.6. Dlt Strategies +#### 4.4.6. Dlt Strategies The framework provides a few strategies for working with DLTs. You can provide a method for DLT processing, use the default logging method, or have no DLT at all. Also you can choose what happens if DLT processing fails. -##### [](#dlt-processing-method)Dlt Processing Method +##### Dlt Processing Method You can specify the method used to process the Dlt for the topic, as well as the behavior if that processing fails. @@ -6804,7 +6804,7 @@ When using the `@RetryableTopic` annotation, set the `autoStartDltHandler` prope You can later start the DLT handler via the `KafkaListenerEndpointRegistry`. -##### [](#dlt-failure-behavior)DLT Failure Behavior +##### DLT Failure Behavior Should the DLT processing fail, there are two possible behaviors available: `ALWAYS_RETRY_ON_ERROR` and `FAIL_ON_ERROR`. @@ -6855,7 +6855,7 @@ You can add exceptions to and remove exceptions from this list using methods on See [Exception Classifier](#retry-topic-ex-classifier) for more information. -##### [](#configuring-no-dlt)Configuring No DLT +##### Configuring No DLT The framework also provides the possibility of not configuring a DLT for the topic. In this case after retrials are exhausted the processing simply ends. @@ -6879,7 +6879,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templ } ``` -#### [](#retry-topic-lcf)4.4.7. Specifying a ListenerContainerFactory +#### 4.4.7. Specifying a ListenerContainerFactory By default the RetryTopic configuration will use the provided factory from the `@KafkaListener` annotation, but you can specify a different one to be used to create the retry topic and dlt listener containers. diff --git a/docs/spring-for-apache-kafka/spring-kafka.md b/docs/spring-for-apache-kafka/spring-kafka.md index 57b80995625645f9c9c21bbe3436ab375da3eca1..8947c1ecf69991ba7c3e9df4143ff0051e3d6c61 100644 --- a/docs/spring-for-apache-kafka/spring-kafka.md +++ b/docs/spring-for-apache-kafka/spring-kafka.md @@ -1,16 +1,16 @@ -# Spring 为 Apache 卡夫卡 +# Spring 为 Apache Kafka -## [](#preface)1。前言 +## 1.前言 Spring for Apache Kafka 项目将核心 Spring 概念应用于基于 Kafka 的消息传递解决方案的开发。我们提供了一个“模板”作为发送消息的高级抽象。我们还为消息驱动的 POJO 提供支持。 -## [](#whats-new-part)2。最新更新? +## 2.最新更新? 
-### [](#spring-kafka-intro-new)2.1。最新更新自 2.7 年以来的 2.8 年 +### 2.1.最新更新自 2.7 年以来的 2.8 年 本部分介绍了从 2.7 版本到 2.8 版本所做的更改。有关早期版本中的更改,请参见[[history]]。 -#### [](#x28-kafka-client)2.1.1。Kafka 客户端版本 +#### 2.1.1.Kafka 客户端版本 此版本需要 3.0.0`kafka-clients` @@ -19,7 +19,7 @@ Spring for Apache Kafka 项目将核心 Spring 概念应用于基于 Kafka 的 有关更多信息,请参见[一次语义学](#exactly-once)和[KIP-447](https://cwiki.apache.org/confluence/display/KAFKA/KIP-447%3A+Producer+scalability+for+exactly+once+semantics)。 -#### [](#x28-packages)2.1.2。软件包更改 +#### 2.1.2.软件包更改 与类型映射相关的类和接口已从`…​support.converter`移动到`…​support.mapping`。 @@ -31,11 +31,11 @@ Spring for Apache Kafka 项目将核心 Spring 概念应用于基于 Kafka 的 * `Jackson2爪哇TypeMapper` -#### [](#x28-ooo-commits)2.1.3。失效的手动提交 +#### 2.1.3.失效的手动提交 现在可以将侦听器容器配置为接受顺序错误的手动偏移提交(通常是异步的)。容器将推迟提交,直到确认丢失的偏移量。有关更多信息,请参见[手动提交偏移](#ooo-commits)。 -#### [](#x28-batch-overrude)2.1.4。`@KafkaListener`变化 +#### 2.1.4.`@KafkaListener`变化 现在可以在方法本身上指定侦听器方法是否为批处理侦听器。这允许对记录和批处理侦听器使用相同的容器工厂。 @@ -47,15 +47,15 @@ Spring for Apache Kafka 项目将核心 Spring 概念应用于基于 Kafka 的 `RecordFilterStrategy`在与批处理侦听器一起使用时,现在可以在一个调用中过滤整个批处理。有关更多信息,请参见[批处理侦听器](#batch-listeners)末尾的注释。 -#### [](#x28-template)2.1.5。`KafkaTemplate`变化 +#### 2.1.5.`KafkaTemplate`变化 给定主题、分区和偏移量,你现在可以接收一条记录。有关更多信息,请参见[使用`KafkaTemplate`接收]。 -#### [](#x28-eh)2.1.6。`CommonErrorHandler`已添加 +#### 2.1.6.`CommonErrorHandler`已添加 遗留的`GenericErrorHandler`及其用于记录批处理侦听器的子接口层次结构已被新的单一接口`CommonErrorHandler`所取代,其实现方式与`GenericErrorHandler`的大多数遗留实现方式相对应。有关更多信息,请参见[容器错误处理程序](#error-handlers)。 -#### [](#x28-lcc)2.1.7。监听器容器更改 +#### 2.1.7.监听器容器更改 默认情况下,`interceptBeforeTx`容器属性现在是`true`。 @@ -63,17 +63,17 @@ Spring for Apache Kafka 项目将核心 Spring 概念应用于基于 Kafka 的 有关更多信息,请参见[使用`KafkaMessageListenerContainer`]和[侦听器容器属性](#container-props)。 -#### [](#x28-serializers)2.1.8。序列化器/反序列化器更改 +#### 2.1.8.序列化器/反序列化器更改 现在提供了`DelegatingByTopicSerializer`和`DelegatingByTopicDeserializer`。有关更多信息,请参见[委派序列化器和反序列化器](#delegating-serialization)。 -#### [](#x28-dlpr)2.1.9。`DeadLetterPublishingRecover`变化 +#### 2.1.9.`DeadLetterPublishingRecover`变化 默认情况下,属性`stripPreviousExceptionHeaders`现在是`true`。 有关更多信息,请参见[管理死信记录头](#dlpr-headers)。 -#### [](#x28-retryable-topics-changes)2.1.10。可重排的主题更改 +#### 2.1.10.可重排的主题更改 现在,你可以对可重试和不可重试的主题使用相同的工厂。有关更多信息,请参见[指定 ListenerContainerFactory](#retry-topic-lcf)。 @@ -81,11 +81,11 @@ Spring for Apache Kafka 项目将核心 Spring 概念应用于基于 Kafka 的 使用可重排主题功能时引发的 KafkabackoffException 现在将在调试级别记录。如果需要更改日志级别以返回警告或将其设置为任何其他级别,请参见[[change-kboe-logging-level]]。 -## [](#introduction)3。导言 +## 3.导言 参考文档的第一部分是对 Spring Apache Kafka 和底层概念以及一些代码片段的高级概述,这些代码片段可以帮助你尽快启动和运行。 -### [](#quick-tour)3.1。快速游览 +### 3.1.快速游览 先决条件:你必须安装并运行 Apache Kafka。然后,你必须将 Apache Kafka(`spring-kafka`)的 Spring JAR 及其所有依赖项放在你的类路径上。最简单的方法是在构建工具中声明一个依赖项。 @@ -127,7 +127,7 @@ compile 'org.springframework.kafka:spring-kafka' 然而,最快的入门方法是使用[start.spring.io](https://start.spring.io)(或 Spring Tool Suits 和 IntelliJ Idea 中的向导)并创建一个项目,选择’ Spring for Apache Kafka’作为依赖项。 -#### [](#compatibility)3.1.1。相容性 +#### 3.1.1.相容性 此快速浏览适用于以下版本: @@ -137,15 +137,15 @@ compile 'org.springframework.kafka:spring-kafka' * 最低 爪哇 版本:8 -#### [](#getting-started)3.1.2。开始 +#### 3.1.2.开始 最简单的入门方法是使用[start.spring.io](https://start.spring.io)(或 Spring Tool Suits 和 IntelliJ Idea 中的向导)并创建一个项目,选择’ Spring for Apache Kafka’作为依赖项。请参阅[Spring Boot documentation](https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-kafka)以获取有关其对基础设施 bean 的自以为是的自动配置的更多信息。 这是一个最小的消费者应用程序。 -##### [](#spring-boot-consumer-app) Spring 引导消费者应用程序 +##### Spring 引导消费者应用程序 -例 1。应用程序 +例 
1.应用程序 爪哇 @@ -200,9 +200,9 @@ spring.kafka.consumer.auto-offset-reset=earliest `NewTopic` Bean 导致在代理上创建主题;如果主题已经存在,则不需要该主题。 -##### [](#spring-boot-producer-app) Spring Boot Producer app +##### Spring Boot Producer app -例 3。应用程序 +例 3.应用程序 爪哇 @@ -253,14 +253,14 @@ class Application { } ``` -##### [](#with-java-configuration-no-spring-boot)带 爪哇 配置(no Spring boot) +##### 带 爪哇 配置(no Spring boot) | |Spring 对于 Apache Kafka 是设计用于在 Spring 应用程序上下文中使用的。
例如,如果你自己在 Spring 上下文之外创建侦听器容器,则并非所有函数都将工作,除非你满足容器实现的所有`…​Aware`接口。| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| 下面是一个不使用 Spring 引导的应用程序的示例;它同时具有`Consumer`和`Producer`。 -例 4。没有引导 +例 4.没有引导 爪哇 @@ -418,15 +418,15 @@ class Config { 正如你所看到的,在不使用 Spring boot 时,你必须定义几个基础设施 bean。 -## [](#reference)4。参考文献 +## 4.参考文献 参考文档的这一部分详细介绍了构成 Spring Apache Kafka 的各种组件。[主要章节](#kafka)涵盖了用 Spring 开发 Kafka 应用程序的核心类。 -### [](#kafka)4.1。用 Spring 表示 Apache 卡夫卡 +### 4.1.用 Spring 表示 Apache Kafka 这一部分提供了对使用 Spring 表示 Apache Kafka 的各种关注的详细解释。欲了解一个简短但不太详细的介绍,请参见[Quick Tour](#quick-tour)。 -#### [](#connecting)4.1.1。连接到 Kafka +#### 4.1.1.连接到 Kafka * `KafkaAdmin`-见[配置主题](#configuring-topics) @@ -440,7 +440,7 @@ class Config { 有关更多信息,请参见 Javadocs。 -##### [](#factory-listeners)工厂听众 +##### 工厂听众 从版本 2.5 开始,`DefaultKafkaProducerFactory`和`DefaultKafkaConsumerFactory`可以配置为`Listener`,以便在创建或关闭生产者或消费者时接收通知。 @@ -478,7 +478,7 @@ interface Listener { 该框架提供了可以做到这一点的侦听器;参见[千分尺本机度量](#micrometer-native)。 -#### [](#configuring-topics)4.1.2。配置主题 +#### 4.1.2.配置主题 如果你在应用程序上下文中定义了`KafkaAdmin` Bean,那么它可以自动向代理添加主题。为此,你可以将每个主题的`NewTopic``@Bean`添加到应用程序上下文中。版本 2.3 引入了一个新的类`TopicBuilder`,以使创建这样的 bean 更加方便。下面的示例展示了如何做到这一点: @@ -654,15 +654,15 @@ private KafkaAdmin admin; client.close(); ``` -#### [](#sending-messages)4.1.3。发送消息 +#### 4.1.3.发送消息 本节介绍如何发送消息。 -##### [](#kafka-template)使用`KafkaTemplate` +##### 使用`KafkaTemplate` 本节介绍如何使用`KafkaTemplate`发送消息。 -###### [](#overview)概述 +###### 概述 `KafkaTemplate`封装了一个生成器,并提供了将数据发送到 Kafka 主题的方便方法。下面的清单显示了`KafkaTemplate`中的相关方法: @@ -843,11 +843,11 @@ future.addCallback(result -> { 如果你希望阻止发送线程以等待结果,则可以调用 Future 的`get()`方法;建议使用带有超时的方法。你可能希望在等待之前调用`flush()`,或者,为了方便起见,模板具有一个带有`autoFlush`参数的构造函数,该构造函数将在每次发送时使模板`flush()`。只有当你设置了`linger.ms`producer 属性并希望立即发送部分批处理时,才需要刷新。 -###### [](#examples)示例 +###### 示例 本节展示了向 Kafka 发送消息的示例: -例 5。非阻塞(异步) +例 5.非阻塞(异步) ``` public void sendToKafka(final MyOutputData data) { @@ -891,7 +891,7 @@ public void sendToKafka(final MyOutputData data) { 注意,`ExecutionException`的原因是`KafkaProducerException`具有`failedProducerRecord`属性。 -##### [](#routing-template)使用`RoutingKafkaTemplate` +##### 使用`RoutingKafkaTemplate` 从版本 2.5 开始,你可以使用`RoutingKafkaTemplate`在运行时基于目标`topic`名称选择生产者。 @@ -941,7 +941,7 @@ public class Application { 对于另一种实现类似结果的技术,但具有向相同主题发送不同类型的附加功能,请参见[委派序列化器和反序列化器](#delegating-serialization)。 -##### [](#producer-factory)使用`DefaultKafkaProducerFactory` +##### 使用`DefaultKafkaProducerFactory` 如[使用`KafkaTemplate`](#kafka-template)中所示,使用`ProducerFactory`创建生产者。 @@ -978,7 +978,7 @@ void removeConfig(String configKey); 从版本 2.8 开始,如果你将序列化器作为对象(在构造函数中或通过 setter)提供,则工厂将调用`configure()`方法来使用配置属性对它们进行配置。 -##### [](#replying-template)使用`ReplyingKafkaTemplate` +##### 使用`ReplyingKafkaTemplate` 版本 2.1.3 引入了`KafkaTemplate`的子类来提供请求/回复语义。该类名为`ReplyingKafkaTemplate`,并具有两个附加方法;以下显示了方法签名: @@ -1178,7 +1178,7 @@ public ConcurrentMessageListenerContainer replyContainer( 从版本 2.3 开始,你可以自定义标题名称-模板有 3 个属性`correlationHeaderName`、`replyTopicHeaderName`和`replyPartitionHeaderName`。如果你的服务器不是 Spring 应用程序(或者不使用`@KafkaListener`),这是有用的。 -###### [](#exchanging-messages)请求/回复`Message`s +###### 请求/回复`Message`s 版本 2.7 在`ReplyingKafkaTemplate`中添加了发送和接收`spring-messaging`的`Message`抽象的方法: @@ -1195,7 +1195,7 @@ RequestReplyMessageFuture sendAndReceive(Message message); 
如果需要为返回类型提供类型信息,请使用第二种方法来帮助消息转换器。这还允许相同的模板接收不同的类型,即使在答复中没有类型元数据,例如当服务器端不是 Spring 应用程序时也是如此。以下是后者的一个例子: -例 6。模板 Bean +例 6.模板 Bean Java @@ -1233,7 +1233,7 @@ fun template( } ``` -例 7。使用模板 +例 7.使用模板 Java @@ -1271,7 +1271,7 @@ val things = future2?.get(10, TimeUnit.SECONDS)?.payload things?.forEach(Consumer { thing1: Thing? -> log.info(thing1.toString()) }) ``` -##### [](#reply-message)回复类型消息 \ +##### 回复类型消息 \ 当`@KafkaListener`返回`Message`时,在版本为 2.5 之前的情况下,需要填充回复主题和相关 ID 头。在本例中,我们使用请求中的回复主题标头: @@ -1301,7 +1301,7 @@ public Message messageReturn(String in) { } ``` -##### [](#aggregating-request-reply)聚合多个回复 +##### 聚合多个回复 [使用`ReplyingKafkaTemplate`](#replying-template)中的模板严格用于单个请求/回复场景。对于单个消息的多个接收者返回答复的情况,可以使用`AggregatingReplyingKafkaTemplate`。这是[散-集 Enterprise 集成模式](https://www.enterpriseintegrationpatterns.com/patterns/messaging/BroadcastAggregate.html)客户端的一个实现。 @@ -1347,11 +1347,11 @@ public static final String PARTIAL_RESULTS_AFTER_TIMEOUT_TOPIC = "partialResults | |如果使用[`ErrorHandlingDeserializer`](#error-handling-deserializer)与此聚合模板,框架将不会自动检测`DeserializationException`s.
相反,记录(带有`null`值)将原封不动地返回,反序列化异常则保存在记录头中。<br/>
建议应用程序调用实用工具方法`ReplyingKafkaTemplate.checkDeserialization()`来确定是否发生了反序列化异常。<br/>
有关更多信息,请参见其 Javadocs。
此聚合模板也不会调用`replyErrorChecker`;你应该对回复的每个元素执行检查。| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#receiving-messages)4.1.4。接收消息 +#### 4.1.4.接收消息 可以通过配置`MessageListenerContainer`并提供消息侦听器或使用`@KafkaListener`注释来接收消息。 -##### [](#message-listeners)消息侦听器 +##### 消息侦听器 当使用[消息侦听器容器](#message-listener-container)时,必须提供一个侦听器来接收数据。目前,消息侦听器有八个受支持的接口。下面的清单展示了这些接口: @@ -1421,7 +1421,7 @@ public interface BatchAcknowledgingConsumerAwareMessageListener extends Ba | |你不应该执行任何`Consumer`方法,这些方法会影响用户在监听器中的位置和或提交偏移;容器需要管理这些信息。| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#message-listener-container)消息侦听器容器 +##### 消息侦听器容器 提供了两个`MessageListenerContainer`实现: @@ -1442,7 +1442,7 @@ public interface BatchAcknowledgingConsumerAwareMessageListener extends Ba 从版本 2.3.8、2.4.6 开始,当并发性大于 1 时,`ConcurrentMessageListenerContainer`现在支持[静态成员](https://kafka.apache.org/documentation/#static_membership)。`group.instance.id`后缀为`-n`,后缀为`n`,起始于`1`。这与增加的`session.timeout.ms`一起,可以用来减少重新平衡事件,例如,当应用程序实例重新启动时。 -###### [](#kafka-container)使用`KafkaMessageListenerContainer` +###### 使用`KafkaMessageListenerContainer` 以下构造函数可用: @@ -1502,7 +1502,7 @@ return container; 从版本 2.8 开始,在创建消费者工厂时,如果你将反序列化器作为对象(在构造函数中或通过 setter)提供,工厂将调用`configure()`方法来使用配置属性对它们进行配置。 -###### [](#using-ConcurrentMessageListenerContainer)使用`ConcurrentMessageListenerContainer` +###### 使用`ConcurrentMessageListenerContainer` 单个构造函数类似于`KafkaListenerContainer`构造函数。下面的清单显示了构造函数的签名: @@ -1529,7 +1529,7 @@ public ConcurrentMessageListenerContainer(ConsumerFactory consumerFactory, 从版本 2.3 开始,`ContainerProperties`提供了一个`idleBetweenPolls`选项,让侦听器容器中的主循环在`KafkaConsumer.poll()`调用之间休眠。从所提供的选项和`max.poll.interval.ms`消费者配置和当前记录批处理时间之间的差值中选择一个实际的睡眠间隔作为最小值。 -###### [](#committing-offsets)提交偏移 +###### 提交偏移 为提交偏移提供了几个选项。如果`enable.auto.commit`消费者属性是`true`,Kafka 将根据其配置自动提交偏移。如果是`false`,则容器支持几个`AckMode`设置(在下一个列表中进行了描述)。默认的`AckMode`是`BATCH`。从版本 2.3 开始,该框架将`enable.auto.commit`设置为`false`,除非在配置中明确设置。以前,如果未设置属性,则使用 Kafka 默认值(`true`)。 @@ -1587,24 +1587,24 @@ public interface Acknowledgment { | |当通过组管理使用分区分配时,重要的是要确保`sleep`参数(加上处理来自上一次投票的记录所花费的时间)小于消费者`max.poll.interval.ms`属性。| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -###### [](#container-auto-startup)侦听器容器自动启动 +###### 侦听器容器自动启动 侦听器容器实现`SmartLifecycle`,而`autoStartup`默认情况下是`true`。容器在后期启动(`Integer.MAX-VALUE - 100`)。实现`SmartLifecycle`以处理来自侦听器的数据的其他组件应该在较早的阶段启动。`- 100`为后面的阶段留出了空间,以使组件能够在容器之后自动启动。 -##### [](#ooo-commits)手动提交偏移 +##### 手动提交偏移 通常,当使用`AckMode.MANUAL`或`AckMode.MANUAL_IMMEDIATE`时,必须按顺序确认确认,因为 Kafka 不为每个记录维护状态,只为每个组/分区维护一个提交的偏移量。从版本 2.8 
开始,你现在可以设置容器属性`asyncAcks`,它允许以任何顺序确认投票返回的记录的确认。侦听器容器将推迟顺序外的提交,直到收到缺少的确认。消费者将被暂停(没有新的记录交付),直到前一次投票的所有补偿都已提交。 | |虽然该特性允许应用程序异步处理记录,但应该理解的是,它增加了在发生故障后重复交付的可能性。| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#kafka-listener-annotation)`@KafkaListener`注释 +##### `@KafkaListener`注释 `@KafkaListener`注释用于指定 Bean 方法作为侦听器容器的侦听器。 Bean 包装在`MessagingMessageListenerAdapter`中配置有各种特征,例如转换器来转换数据,如果需要,以匹配该方法的参数。 可以使用`#{…​}`或属性占位符(`${…​}`)使用 SPEL 配置注释上的大多数属性。有关更多信息,请参见[Javadoc](https://docs.spring.io/spring-kafka/api/org/springframework/kafka/annotation/KafkaListener.html)。 -###### [](#record-listener)记录收听者 +###### 记录收听者 `@KafkaListener`注释为简单的 POJO 侦听器提供了一种机制。下面的示例展示了如何使用它: @@ -1666,7 +1666,7 @@ public void listen(String data) { } ``` -###### [](#manual-assignment)显式分区分配 +###### 显式分区分配 你还可以使用显式的主题和分区(以及它们的初始偏移量)来配置 POJO 侦听器。下面的示例展示了如何做到这一点: @@ -1728,7 +1728,7 @@ public void listen(ConsumerRecord record) { 初始偏移量将应用于所有 6 个分区。 -###### [](#manual-acknowledgment)手动确认 +###### 手动确认 当使用 Manual`AckMode`时,还可以向监听器提供`Acknowledgment`。下面的示例还展示了如何使用不同的容器工厂。 @@ -1741,7 +1741,7 @@ public void listen(String data, Acknowledgment ack) { } ``` -###### [](#consumer-record-metadata)消费者记录元数据 +###### 消费者记录元数据 最后,关于记录的元数据可以从消息头获得。你可以使用以下头名称来检索消息的头: @@ -1784,7 +1784,7 @@ public void listen(String str, ConsumerRecordMetadata meta) { 这包含来自`ConsumerRecord`的所有数据,除了键和值。 -###### [](#batch-listeners)批处理侦听器 +###### 批处理侦听器 从版本 1.1 开始,你可以配置`@KafkaListener`方法来接收从消费者投票中接收到的整批消费者记录。要将侦听器容器工厂配置为创建批处理侦听器,你可以设置`batchListener`属性。下面的示例展示了如何做到这一点: @@ -1873,7 +1873,7 @@ public void pollResults(ConsumerRecords records) { | |如果容器工厂配置了`RecordFilterStrategy`,则对于`ConsumerRecords`侦听器将忽略它,并发出`WARN`日志消息。
如果使用`>`形式的侦听器,则只能使用批侦听器过滤记录。默认情况下<br/>
,记录是一次过滤一次的;从版本 2.8 开始,你可以覆盖`filterBatch`以在一个调用中过滤整个批处理。| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -###### [](#annotation-properties)注释属性 +###### 注释属性 从版本 2.0 开始,`id`属性(如果存在)被用作 Kafka Consumer`group.id`属性,如果存在,则覆盖 Consumer 工厂中的配置属性。还可以显式地将`groupId`设置为`idIsGroup`,也可以将`idIsGroup`设置为 false,以恢复以前使用消费者工厂`group.id`的行为。 @@ -1959,7 +1959,7 @@ public void listen2(byte[] in) { } ``` -##### [](#listener-group-id)获取消费者`group.id` +##### 获取消费者`group.id` 当在多个容器中运行相同的侦听器代码时,能够确定记录来自哪个容器(由其`group.id`消费者属性标识)可能是有用的。 @@ -1976,13 +1976,13 @@ public void listener(@Payload String foo, | |这在接收`List`记录的记录侦听器和批处理侦听器中可用。**不是**在接收`ConsumerRecords`参数的批处理侦听器中可用。
在这种情况下使用`KafkaUtils`机制。| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#container-thread-naming)容器线程命名 +##### 容器线程命名 侦听器容器当前使用两个任务执行器,一个用于调用使用者,另一个用于在 Kafka 消费者属性`enable.auto.commit`为`false`时调用侦听器。你可以通过设置容器的`consumerExecutor`和`listenerExecutor`属性来提供自定义执行器。当使用池执行程序时,确保有足够多的线程可用来处理使用它们的所有容器之间的并发性。当使用`ConcurrentMessageListenerContainer`时,来自每个使用者的线程都用于每个使用者(`concurrency`)。 如果不提供消费者执行器,则使用`SimpleAsyncTaskExecutor`。此执行器创建名称与`-C-1`(使用者线程)类似的线程。对于`ConcurrentMessageListenerContainer`,线程名称的``部分变成`-m`,其中`m`表示消费者实例。`n`每次启动容器时都会增加。所以,具有 Bean 名称的`container`,此容器中的线程将被命名为`container-0-C-1`、`container-1-C-1`等,在容器被第一次启动之后;`container-0-C-2`、`container-1-C-2`等,在停止之后又被随后的启动。 -##### [](#kafka-listener-meta)`@KafkaListener`作为元注释 +##### `@KafkaListener`作为元注释 从版本 2.2 开始,你现在可以使用`@KafkaListener`作为元注释。下面的示例展示了如何做到这一点: @@ -2013,7 +2013,7 @@ public void listen1(String in) { } ``` -##### 在类上[](#class-level-kafkalistener)`@KafkaListener` +##### 在类上`@KafkaListener` 在类级别上使用`@KafkaListener`时,必须在方法级别上指定`@KafkaHandler`。在发送消息时,将使用转换后的消息有效负载类型来确定调用哪个方法。下面的示例展示了如何做到这一点: @@ -2065,7 +2065,7 @@ void listen(Object in, @Header(KafkaHeaders.RECORD_METADATA) ConsumerRecordMetad } ``` -##### [](#kafkalistener-attrs)`topic`属性修改 +##### `topic`属性修改 从版本 2.7.2 开始,你现在可以在创建容器之前以编程方式修改注释属性。为此,将一个或多个`KafkaListenerAnnotationBeanPostProcessor.AnnotationEnhancer`添加到应用程序上下文。`AnnotationEnhancer`是一个`BiFunction, AnnotatedElement, Map`,并且必须返回属性映射。属性值可以包含 SPEL 和/或属性占位符;在执行任何解析之前都会调用增强器。如果存在多个增强器,并且它们实现`Ordered`,则将按顺序调用它们。 @@ -2087,7 +2087,7 @@ public static AnnotationEnhancer groupIdEnhancer() { } ``` -##### [](#kafkalistener-lifecycle)`@KafkaListener`生命周期管理 +##### `@KafkaListener`生命周期管理 为`@KafkaListener`注释创建的侦听器容器不是应用程序上下文中的 bean。相反,它们被注册在类型`KafkaListenerEndpointRegistry`的基础结构 Bean 中。 Bean 由框架自动声明并管理容器的生命周期;它将自动启动将`autoStartup`设置为`true`的任何容器。由所有容器工厂创建的所有容器必须在相同的`phase`中。有关更多信息,请参见[监听器容器自动启动](#container-auto-startup)。你可以通过使用注册表以编程方式管理生命周期。启动或停止注册表将启动或停止所有已注册的容器。或者,你可以通过使用其`id`属性获得对单个容器的引用。你可以在注释上设置`autoStartup`,这会覆盖配置到容器工厂中的默认设置。你可以从应用程序上下文中获得对 Bean 的引用,例如自动布线,以管理其注册的容器。下面的例子说明了如何做到这一点: @@ -2109,7 +2109,7 @@ private KafkaListenerEndpointRegistry registry; 注册中心仅维护其管理的容器的生命周期;声明为 bean 的容器不受注册中心的管理,可以从应用程序上下文中获得。可以通过调用注册表的`getListenerContainers()`方法获得托管容器的集合。版本 2.2.5 添加了一个方便的方法`getAllListenerContainers()`,该方法返回所有容器的集合,包括由注册中心管理的容器和声明为 bean 的容器。返回的集合将包括任何已初始化的原型 bean,但它不会初始化任何懒惰的 Bean 声明。 -##### [](#kafka-validation)`@KafkaListener``@Payload`验证 +##### `@KafkaListener``@Payload`验证 从版本 2.2 开始,现在更容易添加`Validator`来验证`@KafkaListener``@Payload`参数。以前,你必须配置一个自定义`DefaultMessageHandlerMethodFactory`并将其添加到注册商。现在,你可以将验证器添加到注册器本身。下面的代码展示了如何做到这一点: @@ -2183,7 +2183,7 @@ public KafkaListenerErrorHandler validationErrorHandler() { 从版本 2.5.11 开始,验证现在可以在类级侦听器中的`KafkaMessageListenerContainer`方法的有效负载上进行。参见[`@KafkaListener`on a class](#class-level-kafkalistener)。 -##### [](#rebalance-listeners)重新平衡听众 +##### 重新平衡听众 `ContainerProperties`具有一个名为`consumerRebalanceListener`的属性,它接受了 Kafka 客户机的`ConsumerRebalanceListener`接口的一个实现。如果不提供此属性,则容器将配置一个日志侦听器,该侦听器将在`INFO`级别记录重新平衡事件。该框架还添加了一个子接口`@KafkaListener`。下面的清单显示了`ConsumerAwareRebalanceListener`接口定义: @@ -2230,7 +2230,7 @@ containerProperties.setConsumerRebalanceListener(new ConsumerAwareRebalanceListe | |从版本 2.4 开始,已经添加了一个新的方法`onPartitionsLost()`(类似于`ConsumerRebalanceLister`中同名的方法)。
`ConsumerRebalanceListener`上的默认实现只调用`onPartitionsRevoked`。<br/>
`ConsumerAwareRebalanceListener`上的默认实现则什么也不做。在向侦听器容器提供自定义侦听器(任一种类型)时,重要的是你的实现不要从`onPartitionsLost`中调用`onPartitionsRevoked`。<br/>
如果你实现`ConsumerRebalanceListener`,那么你应该覆盖默认的方法。
这是因为侦听器容器会在调用你的实现中的方法之后,从它自己的`onPartitionsLost`实现中调用其自身的`onPartitionsRevoked`。<br/>
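例如,下面是一个最小的示意(类名`MyRebalanceListener`为假设),演示在自定义的`ConsumerAwareRebalanceListener`中覆盖`onPartitionsLost`时只清理本地状态、而不调用`onPartitionsRevoked`:

```
import java.util.Collection;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;

public class MyRebalanceListener implements ConsumerAwareRebalanceListener {

    @Override
    public void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // 在容器提交偏移量之前完成任何挂起的工作
    }

    @Override
    public void onPartitionsLost(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // 只清理本地状态;特意不在这里调用 onPartitionsRevoked,
        // 容器随后会执行它自己的撤销逻辑
    }
}
```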
如果你将实现委托给默认行为,则每次`Consumer`在容器的侦听器上调用该方法时,`onPartitionsRevoked`都会被调用两次。| |---|------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#annotation-send-to)使用`@SendTo`转发监听器结果 +##### 使用`@SendTo`转发监听器结果 从版本 2.0 开始,如果你还为`@KafkaListener`添加了`@SendTo`注释,并且方法调用返回一个结果,则结果将被转发到`@SendTo`所指定的主题。 @@ -2367,7 +2367,7 @@ public KafkaTemplate myReplyingTemplate() { | |如果侦听器方法返回`Iterable`,那么默认情况下,每个元素的值都会被发送,<br/>
从版本 2.3.5 开始,将`@KafkaListener`上的`splitIterables`属性设置为`false`,整个结果将作为单个`ProducerRecord`的值发送。
这需要在回复模板的生产者配置中有一个合适的序列化器。<br/>
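下面是一个简短的示意(主题名`requests`、`replies`及侦听器`id`均为假设),将`splitIterables`设置为`false`,使整个列表作为单条回复记录发送:

```
@KafkaListener(id = "wholeList", topics = "requests", splitIterables = false)
@SendTo("replies")
public List<String> reply(String in) {
    // splitIterables = false:整个 List 作为单个 ProducerRecord 的值发送,
    // 因此回复模板的生产者需要配置能够序列化 List 的序列化器(例如 JsonSerializer)
    return Arrays.asList(in.toUpperCase(), in.toLowerCase());
}
```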
但是,如果回复是`Iterable>`,则忽略该属性,并分别发送每条消息。| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#filtering-messages)过滤消息 +##### 过滤消息 在某些情况下,例如重新平衡,已经处理过的消息可能会被重新传递。框架不能知道这样的消息是否已被处理。这是一个应用程序级函数。这被称为[幂等接收机](https://www.enterpriseintegrationpatterns.com/patterns/messaging/IdempotentReceiver.html)模式,并且 Spring 集成提供了[幂等接收机](https://www.enterpriseintegrationpatterns.com/patterns/messaging/IdempotentReceiver.html)。 @@ -2380,11 +2380,11 @@ Spring for Apache Kafka 项目还通过`FilteringMessageListenerAdapter`类提 | |如果你的`@KafkaListener`接收的是`ConsumerRecords`而不是`List>`,则忽略`FilteringBatchMessageListenerAdapter`,因为`ConsumerRecords`是不可变的。| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#retrying-deliveries)重试送货 +##### 重试送货 参见[处理异常](#annotation-error-handling)中的`DefaultErrorHandler`。 -##### [](#sequencing)按顺序开始`@KafkaListener`s +##### 按顺序开始`@KafkaListener`s 一个常见的用例是,在另一个侦听器消耗了一个主题中的所有记录之后,启动一个侦听器。例如,在处理来自其他主题的记录之前,你可能希望将一个或多个压缩主题的内容加载到内存中。从版本 2.7.3 开始,引入了一个新的组件`ContainerGroupSequencer`。它使用`@KafkaListener``containerGroup`属性将容器分组,并在当前组中的所有容器都空闲时启动下一个组中的容器。 @@ -2421,7 +2421,7 @@ ContainerGroupSequencer sequencer(KafkaListenerEndpointRegistry registry) { 作为旁白;以前,每个组中的容器都被添加到类型`Collection`的 Bean 中,其 Bean 名称为`containerGroup`。现在不推荐这些集合,而支持类型`ContainerGroup`的 bean,其 Bean 名称是组名,后缀为`.group`;在上面的示例中,将有 2 个 bean`g1.group`和`g2.group`。`Collection`bean 将在未来的版本中被删除。 -##### [](#kafka-template-receive)使用`KafkaTemplate`接收 +##### 使用`KafkaTemplate`接收 本节介绍如何使用`KafkaTemplate`接收消息。 @@ -2441,87 +2441,87 @@ ConsumerRecords receive(Collection requested, Durati 使用最后两个方法,可以单独检索每个记录,并将结果组装到`ConsumerRecords`对象中。在为请求创建`TopicPartitionOffset`s 时,只支持正的绝对偏移量。 -#### [](#container-props)4.1.5。侦听器容器属性 +#### 4.1.5.侦听器容器属性 | Property | Default |说明| |---------------------------------------------------------------|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| []()[`ackCount`](#ackCount) | 1 |当`ackMode`为`COUNT`或`COUNT_TIME`时,提交挂起偏移之前的记录数量。| -| []()[`adviceChain`](#adviceChain) | `null` |一串`Advice`对象(例如`MethodInterceptor`关于建议)包装消息侦听器,按顺序调用。| -| []()[`ackMode`](#ackMode) | BATCH 
|控制提交偏移的频率-参见[提交补偿](#committing-offsets)。| -| []()[`ackOnError`](#ackOnError) | `false` |[不赞成`ErrorHandler.isAckAfterHandle()`]| -| []()[`ackTime`](#ackTime) | 5000 |当`ackMode`为`TIME`或`COUNT_TIME`时,提交挂起的偏移量的时间(以毫秒为单位)。| -| []()[`assignmentCommitOption`](#assignmentCommitOption) | LATEST\_ONLY \_NO\_TX |是否提交分配时的初始位置;默认情况下,只有当`ConsumerConfig.AUTO_OFFSET_RESET_CONFIG`是`latest`时,才会提交初始偏移,并且即使存在事务管理器,也不会在事务中运行。
有关可用选项的更多信息,请参见`ContainerProperties.AssignmentCommitOption`的 Javadocs。| -|[]()[`authExceptionRetryInterval`](#authExceptionRetryInterval)| `null` |当不为 null 时,如果 Kafka 客户端抛出`AuthenticationException`或`AuthorizationException`,则在两次轮询之间休眠该时长;当为 null 时,此类异常被认为是致命的,容器将停止。| -| []()[`clientId`](#clientId) | (empty string) |`client.id`消费者属性的前缀。<br/>
覆盖了消费者工厂`client.id`属性;在并发容器中,`-n`被添加为每个消费者实例的后缀。| -| []()[`checkDeserExWhenKeyNull`](#checkDeserExWhenKeyNull) | false |设置为`true`,以便在接收到`null``key`时始终检查`DeserializationException`报头。<br/>
在消费者代码无法确定已配置`ErrorHandlingDeserializer`时有用,例如在使用委托反序列化器时。| -| []()[`checkDeserExWhenValueNull`](#checkDeserExWhenValueNull) | false |设置为`true`,以便在接收到`null``value`时始终检查`DeserializationException`报头。<br/>
在消费者代码无法确定已配置`ErrorHandlingDeserializer`时有用,例如在使用委托反序列化器时。| -| []()[`commitCallback`](#commitCallback) | `null` |当 present 和`syncCommits`是`false`时,在提交完成后调用的回调。| -| []()[`commitLogLevel`](#commitLogLevel) | DEBUG |用于提交偏移的日志的日志记录级别。| -| []()[`consumerRebalanceListener`](#consumerRebalanceListener) | `null` |一个重新平衡的监听器;参见[重新平衡听众](#rebalance-listeners)。| -| []()[`consumerStartTimout`](#consumerStartTimout) | 30s |在记录错误之前等待使用者启动的时间;如果使用线程不足的任务执行器,可能会发生这种情况。| -| []()[`consumerTaskExecutor`](#consumerTaskExecutor) |`SimpleAsyncTaskExecutor`|用于运行使用者线程的任务执行器。
默认执行器创建名为`-C-n`的线程;使用`KafkaMessageListenerContainer`,名称为 Bean 名称;使用`ConcurrentMessageListenerContainer`,名称为 Bean 名称,后缀为`-n`,其中 n 为每个子容器递增。| -| []()[`deliveryAttemptHeader`](#deliveryAttemptHeader) | `false` |见[传递尝试标头](#delivery-header)。| -| []()[`eosMode`](#eosMode) | `V2` |精确一次语义模式;参见`syncCommits`。| -| []()[`fixTxOffsets`](#fixTxOffsets) | `false` |当消费由事务生产者产生的记录时,并且消费者被定位在分区的末尾,延迟可能会被错误地报告为大于零,这是由于用于指示事务提交/回滚的伪记录,并且,可能,回滚记录的存在。
这在功能上不会影响消费者,但一些用户对“lag”不为零表示了担忧。<br/>
将此属性设置为`true`,容器将更正此类错误报告的偏移量。
在下一次轮询之前执行检查,以避免给提交处理增加很大的复杂性。
在编写本文时,只有当消费者被配置为`isolation.level=read_committed`且`max.poll.records`大于 1 时,滞后才会得到纠正。
有关更多信息,请参见[KAFKA-10683](https://issues.apache.org/jira/browse/KAFKA-10683)。| -| []()[`groupId`](#groupId) | `null` |覆盖消费者`group.id`属性;由`@KafkaListener`的`id`或`groupId`属性自动设置。| -| []()[`idleBeforeDataMultiplier`](#idleBeforeDataMultiplier) | 5.0 |在接收到任何记录之前应用的<br/>
乘法器。
在接收到记录之后,不再应用乘法器。
自版本 2.8 起可用。| -| []()[`idleBetweenPolls`](#idleBetweenPolls) | 0 |用于通过在轮询之间休眠线程来减慢交付速度。
处理一批记录的时间加上该值必须小于`max.poll.interval.ms`消费者属性。| -| []()[`idleEventInterval`](#idleEventInterval) | `null` |设置时,启用`ListenerContainerIdleEvent`s 的发布,参见[应用程序事件](#events)和[检测空闲和无响应的消费者](#idle-containers)。
也参见`idleBeforeDataMultiplier`。| -|[]()[`idlePartitionEventInterval`](#idlePartitionEventInterval)| `null` |设置时,启用`ListenerContainerIdlePartitionEvent`s 的发布,参见[应用程序事件](#events)和[检测空闲和无响应的消费者](#idle-containers)。| -| []()[`kafkaConsumerProperties`](#kafkaConsumerProperties) | None |用于覆盖在消费者工厂上配置的任意消费者属性。| -| []()[`logContainerConfig`](#logContainerConfig) | `false` |设置为 true 以在信息级别记录所有容器属性.| -| []()[`messageListener`](#messageListener) | `null` |消息监听器。| -| []()[`micrometerEnabled`](#micrometerEnabled) | `true` |是否为用户线程维护千分尺计时器。| -| []()[`missingTopicsFatal`](#missingTopicsFatal) | `false` |如果代理上不存在配置的主题,则当 TRUE 阻止容器启动时。| -| []()[`monitorInterval`](#monitorInterval) | 30s |检查`NonResponsiveConsumerEvent`s.
使用者线程状态的频率;参见`noPollThreshold`和`pollTimeout`。| -| []()[`noPollThreshold`](#noPollThreshold) | 3.0 |乘以`pollTimeOut`,以确定是否发布`NonResponsiveConsumerEvent`。<br/>
见`monitorInterval`。| -| []()[`onlyLogRecordMetadata`](#onlyLogRecordMetadata) | `false` |设置为 false 以在错误、调试等日志中记录完整的使用者记录,而不仅仅是`topic-partition@offset`。| -| []()[`pollTimeout`](#pollTimeout) | 5000 |传递给`Consumer.poll()`的超时。| -| []()[`scheduler`](#scheduler) |`ThreadPoolTaskScheduler`|在其上运行消费者监视器任务的计划程序。| -| []()[`shutdownTimeout`](#shutdownTimeout) | 10000 |`stop()`方法阻塞的最长时间(毫秒),直到所有消费者停止并且在发布容器停止事件之前。| -| []()[`stopContainerWhenFenced`](#stopContainerWhenFenced) | `false` |如果抛出了`ProducerFencedException`,则停止侦听器容器。<br/>
有关更多信息,请参见[后回滚处理器](#after-rollback)。| -| []()[`stopImmediate`](#stopImmediate) | `false` |当容器被停止时,在当前记录之后停止处理,而不是在处理来自上一个轮询的所有记录之后。| -| []()[`subBatchPerPartition`](#subBatchPerPartition) | See desc. |当使用批处理侦听器时,如果这是`true`,则调用侦听器,并将轮询的结果分割为子批,每个分区一个。
默认`false`,除非使用`EOSMode.ALPHA`的事务-参见[一次语义学](#exactly-once)。| -| []()[`syncCommitTimeout`](#syncCommitTimeout) | `null` |当`syncCommits`时要使用的超时是`true`。
未设置时,容器将尝试确定`default.api.timeout.ms`消费者属性并使用它;否则将使用 60 秒。| -| []()[`syncCommits`](#syncCommits) | `true` |是否使用同步或异步提交进行偏移;请参见`commitCallback`。| -| []()[`topics` `topicPattern` `topicPartitions`](#topics) | n/a |已配置的主题、主题模式或显式分配的主题/分区。
互斥;至少必须提供一个;由`ContainerProperties`构造函数强制执行。| -| []()[`transactionManager`](#transactionManager) | `null` |见[交易](#transactions)。| +| | 1 |当`ackMode`为`COUNT`或`COUNT_TIME`时,提交挂起偏移之前的记录数量。| +| | `null` |一串`Advice`对象(例如`MethodInterceptor`关于建议)包装消息侦听器,按顺序调用。| +| 。| +| `]| +| | 5000 |当`ackMode`为`TIME`或`COUNT_TIME`时,提交挂起的偏移量的时间(以毫秒为单位)。| +| | LATEST\_ONLY \_NO\_TX |是否提交分配时的初始位置;默认情况下,只有当`ConsumerConfig.AUTO_OFFSET_RESET_CONFIG`是`latest`时,才会提交初始偏移,并且即使存在事务管理器,也不会在事务中运行。
有关可用选项的更多信息,请参见`ContainerProperties.AssignmentCommitOption`的 Javadocs。| +|| `null` |当不是 null 时,当 Kafka 客户端抛出一个`AuthenticationException`或`AuthorizationException`时,一个`ContainerProperties.AssignmentCommitOption`在轮询之间休眠。`ContainerProperties.AssignmentCommitOption`当为 null 时,此类异常被认为是致命的,容器将停止。| +| |`client.id`消费者属性的前缀。
覆盖了消费者工厂`client.id`属性;在并发容器中,`ContainerProperties.AssignmentCommitOption`被添加为每个消费者实例的后缀。| +| | false |设置为`true`,以便在接收到`null``key`报头时始终检查`DeserializationException`报头。
在消费者代码无法确定已配置`ErrorHandlingDeserializer`时有用,例如在使用委托反序列化器时。| +| | false |设置为`true`,以便在接收到`DeserializationException``value`报头时始终检查`DeserializationException`报头。
在消费者代码无法确定已配置`ErrorHandlingDeserializer`时有用,例如在使用委托反序列化器时。| +| | `null` |当 present 和`syncCommits`是`false`时,在提交完成后调用的回调。| +| | DEBUG |用于提交偏移的日志的日志记录级别。| +| 。| +| | 30s |在记录错误之前等待使用者启动的时间;如果使用线程不足的任务执行器,可能会发生这种情况。| +| |`SimpleAsyncTaskExecutor`|用于运行使用者线程的任务执行器。
默认执行器创建名为`-C-n`的线程;使用`KafkaMessageListenerContainer`,名称为 Bean 名称;使用`ConcurrentMessageListenerContainer`,名称为 Bean 名称,后缀为`-n`,其中 n 为每个子容器递增。| +| 。| +| | `V2` |精确一次语义模式;参见`syncCommits`。| +| 。| +| | `null` |覆盖消费者`group.id`属性;由`isolation.level=read_committed``id`或`groupId`属性自动设置。| +| | 5.0 |在接收到任何记录之前应用的
乘法器。
在接收到记录之后,不再应用乘法器。
自版本 2.8 起可用。| +| | 0 |用于通过在轮询之间休眠线程来减慢交付速度。
处理一批记录的时间加上该值必须小于`max.poll.interval.ms`消费者属性。| +| 。
也参见`idleBeforeDataMultiplier`。| +|。| +| | None |用于覆盖在消费者工厂上配置的任意消费者属性。| +| | `false` |设置为 true 以在信息级别记录所有容器属性.| +| | `null` |消息监听器。| +| | `true` |是否为用户线程维护千分尺计时器。| +| | `false` |如果代理上不存在配置的主题,则当 TRUE 阻止容器启动时。| +| 和`pollTimeout`。| +| | 3.0 |乘以`pollTimeOut`,以确定是否发布`NonResponsiveConsumerEvent`。
见`monitorInterval`。| +| `。| +| `。| +| |`ThreadPoolTaskScheduler`|在其上运行消费者监视器任务的计划程序。| +| `方法的最长时间,直到所有消费者停止并且在发布容器停止事件之前。| +| 。| +| | `false` |当容器被停止时,在当前记录之后停止处理,而不是在处理来自上一个轮询的所有记录之后。| +| 。| +| | `null` |当`syncCommits`时要使用的超时是`true`。
未设置时,容器将尝试确定`default.api.timeout.ms`消费者属性并使用它;否则将使用 60 秒。| +| | `true` |是否使用同步或异步提交进行偏移;请参见`commitCallback`。| +| | n/a |已配置的主题、主题模式或显式分配的主题/分区。
互斥;至少必须提供一个;由`ContainerProperties`构造函数强制执行。| +| 。| | Property | Default |说明| |-------------------------------------------------------------|-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| []()[`afterRollbackProcessor`](#afterRollbackProcessor) |`DefaultAfterRollbackProcessor`|回滚事务后调用的`AfterRollbackProcessor`。| -|[]()[`applicationEventPublisher`](#applicationEventPublisher)| application context |事件发布者。| -| []()[`batchErrorHandler`](#batchErrorHandler) | See desc. |弃用-见`commonErrorHandler`。| -| []()[`batchInterceptor`](#batchInterceptor) | `null` |设置`BatchInterceptor`在调用批处理侦听器之前调用;不适用于记录侦听器。
另请参见`interceptBeforeTx`。| -| []()[`beanName`](#beanName) | bean name |容器的 Bean 名称;后缀为子容器的`-n`。| -| []()[`commonErrorHandler`](#commonErrorHandler) | See desc. |`DefaultErrorHandler`或`null`当使用`DefaultAfterRollbackProcessor`时提供`transactionManager`。
见[容器错误处理程序](#error-handlers)。| -| []()[`containerProperties`](#containerProperties) | `ContainerProperties` |容器属性实例。| -| []()[`errorHandler`](#errorHandler) | See desc. |弃用-见`commonErrorHandler`。| -| []()[`genericErrorHandler`](#genericErrorHandler) | See desc. |弃用-见`commonErrorHandler`。| -| []()[`groupId`](#groupId) | See desc. |`default.api.timeout.ms`,如果存在,否则来自消费工厂的`group.id`属性。| -| []()[`interceptBeforeTx`](#interceptBeforeTx) | `true` |确定是在事务开始之前还是之后调用`recordInterceptor`。| -| []()[`listenerId`](#listenerId) | See desc. |Bean 用户配置容器的名称或`@KafkaListener`s 的`id`属性。| -| []()[`pauseRequested`](#pauseRequested) | (read only) |如果请求了消费者暂停,则为真。| -| []()[`recordInterceptor`](#recordInterceptor) | `null` |设置`RecordInterceptor`在调用记录侦听器之前调用;不适用于批处理侦听器。
另请参见`interceptBeforeTx`。| -| []()[`topicCheckTimeout`](#topicCheckTimeout) | 30s |当`missingTopicsFatal`容器属性是`true`时,要等待多长时间(以秒为单位)才能完成`describeTopics`操作。| +| |`DefaultAfterRollbackProcessor`|回滚事务后调用的`AfterRollbackProcessor`。| +|| application context |事件发布者。| +| | See desc. |弃用-见`commonErrorHandler`。| +| | `null` |设置`BatchInterceptor`在调用批处理侦听器之前调用;不适用于记录侦听器。
另请参见`interceptBeforeTx`。| +| | bean name |容器的 Bean 名称;后缀为子容器的`-n`。| +| 。| +| | `ContainerProperties` |容器属性实例。| +| | See desc. |弃用-见`commonErrorHandler`。| +| | See desc. |弃用-见`commonErrorHandler`。| +| | See desc. |`default.api.timeout.ms`,如果存在,否则来自消费工厂的`group.id`属性。| +| | `true` |确定是在事务开始之前还是之后调用`recordInterceptor`。| +| | See desc. |Bean 用户配置容器的名称或`@KafkaListener`s 的`id`属性。| +| |如果请求了消费者暂停,则为真。| +| | `null` |设置`RecordInterceptor`在调用记录侦听器之前调用;不适用于批处理侦听器。
另请参见`interceptBeforeTx`。| +| | 30s |当`missingTopicsFatal`容器属性是`true`时,要等待多长时间(以秒为单位)才能完成`describeTopics`操作。| | Property | Default |说明| |-------------------------------------------------------------------|-----------|----------------------------------------------------------------------------------------------| -| []()[`assignedPartitions`](#assignedPartitions) |(read only)|当前分配给这个容器的分区(显式或非显式)。| -|[]()[`assignedPartitionsByClientId`](#assignedPartitionsByClientId)|(read only)|当前分配给这个容器的分区(显式或非显式)。| -| []()[`clientIdSuffix`](#clientIdSuffix) | `null` |并发容器用于为每个子容器的使用者提供唯一的`client.id`。| -| []()[`containerPaused`](#containerPaused) | n/a |如果请求暂停,而消费者实际上已经暂停,则为真。| +| |当前分配给这个容器的分区(显式或非显式)。| +||当前分配给这个容器的分区(显式或非显式)。| +| | `null` |并发容器用于为每个子容器的使用者提供唯一的`client.id`。| +| | n/a |如果请求暂停,而消费者实际上已经暂停,则为真。| | Property | Default |说明| |-------------------------------------------------------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| []()[`alwaysClientIdSuffix`](#alwaysClientIdSuffix) | `true` |设置为 FALSE 以禁止在`concurrency`消费者属性中添加后缀,此时`concurrency`仅为 1。| -| []()[`assignedPartitions`](#assignedPartitions) |(read only)|当前分配给这个容器的子`KafkaMessageListenerContainer`s 的分区的集合(显式或非显式)。| -|[]()[`assignedPartitionsByClientId`](#assignedPartitionsByClientId)|(read only)|当前分配给这个容器的子容器`KafkaMessageListenerContainer`s(显式或非显式)的分区,由子容器的使用者的`client.id`属性进行键控。| -| []()[`concurrency`](#concurrency) | 1 |要管理的子`KafkaMessageListenerContainer`s 的数量。| -| []()[`containerPaused`](#containerPaused) | n/a |如果请求了暂停,并且所有子容器的使用者实际上已经暂停,则为真。| -| []()[`containers`](#containers) | n/a |对所有子`KafkaMessageListenerContainer`s 的引用。| +| | `true` |设置为 FALSE 以禁止在`concurrency`消费者属性中添加后缀,此时`concurrency`仅为 1.| +| |当前分配给这个容器的子`KafkaMessageListenerContainer`s 的分区的集合(显式或非显式)。| +||当前分配给这个容器的子容器`KafkaMessageListenerContainer`s(显式或非显式)的分区,由子容器的使用者的`client.id`属性进行键控。| +| | 1 |要管理的子`KafkaMessageListenerContainer`s 的数量。| +| | n/a |如果请求了暂停,并且所有子容器的使用者实际上已经暂停,则为真。| +| | n/a |对所有子`KafkaMessageListenerContainer`s 的引用。| -#### [](#events)4.1.6。应用程序事件 +#### 4.1.6.应用程序事件 以下 Spring 应用程序事件由侦听器容器及其使用者发布: @@ -2656,7 +2656,7 @@ if (event.getReason.equals(Reason.FENCED)) { } ``` -##### [](#idle-containers)检测空闲和无响应的消费者 +##### 检测空闲和无响应的消费者 尽管效率很高,但异步用户的一个问题是检测它们何时空闲。如果一段时间内没有消息到达,你可能需要采取一些措施。 @@ -2696,7 +2696,7 @@ public ConcurrentKafkaListenerContainerFactory kafkaListenerContainerFactory() { 从版本 2.6.2 开始,如果容器已经发布了`ListenerContainerIdleEvent`,那么当随后接收到一条记录时,它将发布`ListenerContainerNoLongerIdleEvent`。 -##### [](#event-consumption)事件消费 +##### 事件消费 你可以通过实现`ApplicationListener`来捕获这些事件——或者是一个普通的侦听器,或者是一个缩小到只接收这个特定事件的侦听器。还可以使用 Spring Framework4.2 中介绍的`@EventListener`。 @@ -2730,11 +2730,11 @@ public class Listener { | |如果希望使用空闲事件停止 Lister 容器,则不应在调用侦听器的线程上调用`container.stop()`。
这样做会导致延迟和不必要的日志消息。相反,
你应该将事件传递给另一个线程,该线程可以停止容器。<br/>
此外,如果该容器实例是一个子容器,则不应对该容器实例调用`stop()`。<br/>
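下面是一个示意(侦听器`id`取`"myListener"`仅为假设),通过`@EventListener`接收空闲事件,把停止操作交给另一个线程,并经由注册表停止并发容器而不是子容器:

```
import org.springframework.context.event.EventListener;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.core.task.TaskExecutor;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.event.ListenerContainerIdleEvent;
import org.springframework.stereotype.Component;

@Component
public class IdleStopper {

    private final KafkaListenerEndpointRegistry registry;

    private final TaskExecutor exec = new SimpleAsyncTaskExecutor("idle-stop-");

    public IdleStopper(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @EventListener
    public void onIdle(ListenerContainerIdleEvent event) {
        // 不在侦听器线程上调用 stop();交给另一个线程,并停止父(并发)容器
        this.exec.execute(() -> this.registry.getListenerContainer("myListener").stop());
    }
}
```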
你应该停止并发容器。| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -###### [](#current-positions-when-idle)空闲时的当前位置 +###### 空闲时的当前位置 请注意,你可以通过在侦听器中实现`ConsumerSeekAware`来获得检测到空闲时的当前位置。见`onIdleContainer()`in[寻求一种特定的抵消](#seek)。 -#### [](#topicpartition-initial-offset)4.1.7。主题/分区初始偏移 +#### 4.1.7.主题/分区初始偏移 有几种方法可以设置分区的初始偏移量。 @@ -2746,7 +2746,7 @@ public class Listener { * 对于现有的组 ID,初始偏移量是该组 ID 的当前偏移量。但是,你可以在初始化期间(或之后的任何时间)寻求特定的偏移量。 -#### [](#seek)4.1.8。寻求一种特定的抵消 +#### 4.1.8.寻求一种特定的抵消 为了进行查找,侦听器必须实现`ConsumerSeekAware`,它具有以下方法: @@ -2960,7 +2960,7 @@ public class SomeOtherBean { } ``` -#### [](#container-factory)4.1.9。集装箱工厂 +#### 4.1.9.集装箱工厂 正如[`@KafkaListener`注释](#kafka-listener-annotation)中所讨论的,`ConcurrentKafkaListenerContainerFactory`用于为带注释的方法创建容器。 @@ -2994,7 +2994,7 @@ public KafkaListenerContainerFactory kafkaListenerContainerFactory() { } ``` -#### [](#thread-safety)4.1.10。螺纹安全 +#### 4.1.10.螺纹安全 当使用并发消息侦听器容器时,将在所有使用者线程上调用单个侦听器实例。因此,侦听器需要是线程安全的,最好是使用无状态侦听器。如果不可能使你的侦听器线程安全,或者添加同步将大大降低添加并发性的好处,那么你可以使用以下几种技术中的一种: @@ -3009,9 +3009,9 @@ public KafkaListenerContainerFactory kafkaListenerContainerFactory() { | |默认情况下,应用程序上下文的事件多播器调用调用调用线程上的事件侦听器。
如果你将多播器更改为使用异步执行器,则线程清理将无效。| |---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#micrometer)4.1.11。监测 +#### 4.1.11.监测 -##### [](#monitoring-listener-performance)监视侦听器性能 +##### 监视侦听器性能 从版本 2.3 开始,如果在类路径上检测到`Micrometer`,并且在应用程序上下文中存在一个`MeterRegistry`,则侦听器容器将自动为侦听器创建和更新微米计`Timer`s。可以通过将`ContainerProperty``micrometerEnabled`设置为`false`来禁用计时器。 @@ -3030,7 +3030,7 @@ public KafkaListenerContainerFactory kafkaListenerContainerFactory() { | |使用并发容器,为每个线程创建计时器,`name`标记后缀为`-n`,其中 n 为`0`到`concurrency-1`。| |---|---------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#monitoring-kafkatemplate-performance)监控 Kafkatemplate 性能 +##### 监控 Kafkatemplate 性能 从版本 2.5 开始,如果在类路径上检测到`Micrometer`,并且在应用程序上下文中存在一个`MeterRegistry`,则模板将自动为发送操作创建和更新 Micrometer`Timer`s。可以通过将模板的`micrometerEnabled`属性设置为`false`来禁用计时器。 @@ -3046,7 +3046,7 @@ public KafkaListenerContainerFactory kafkaListenerContainerFactory() { 你可以使用模板的`micrometerTags`属性添加其他标记。 -##### [](#micrometer-native)千分尺本机度量 +##### 千分尺本机度量 从版本 2.5 开始,该框架提供[工厂监听器](#factory-listeners)来管理微米计`KafkaClientMetrics`实例,无论何时创建和关闭生产者和消费者。 @@ -3093,11 +3093,11 @@ double count = this.meterRegistry.get("kafka.producer.node.incoming.byte.total") 为`StreamsBuilderFactoryBean`提供了类似的侦听器-参见[Kafkastreams 测微仪支持](#streams-micrometer)。 -#### [](#transactions)4.1.12。交易 +#### 4.1.12.交易 本节描述了 Spring for Apache Kafka 如何支持事务。 -##### [](#overview-2)概述 +##### 概述 0.11.0.0 客户端库增加了对事务的支持。 Spring For Apache Kafka 通过以下方式增加了支持: @@ -3121,13 +3121,13 @@ double count = this.meterRegistry.get("kafka.producer.node.incoming.byte.total") | |从版本 2.5.8 开始,你现在可以在生产者工厂上配置`maxAge`属性,
这在使用事务生产者时很有用,因为这些生产者可能会闲置超过代理的`transactional.id.expiration.ms`。<br/>
使用当前的`kafka-clients`,这可能会导致`ProducerFencedException`而不进行再平衡。
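一个最小的示意(具体配置项从略,属于假设),在生产者工厂上设置`maxAge`,使其小于代理的`transactional.id.expiration.ms`(代理默认值为 7 天):

```
Map<String, Object> configs = new HashMap<>(); // bootstrap.servers、序列化器等常规生产者配置从略
DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(configs);
pf.setTransactionIdPrefix("tx-");
// 空闲时间超过 maxAge 的生产者会在下次取用前被工厂刷新,避免 ProducerFencedException
pf.setMaxAge(Duration.ofDays(1));
```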
通过将`maxAge`设置为`transactional.id.expiration.ms`小于`transactional.id.expiration.ms`,工厂将刷新生产者,如果它已经超过了最大年龄。| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#using-kafkatransactionmanager)使用`KafkaTransactionManager` +##### 使用`KafkaTransactionManager` `KafkaTransactionManager`是 Spring 框架`PlatformTransactionManager`的一个实现。为生产厂在其构造中的应用提供了参考.如果你提供了一个自定义的生产者工厂,那么它必须支持事务。见`ProducerFactory.transactionCapable()`。 你可以使用具有正常 Spring 事务支持的`KafkaTransactionManager`(`@Transactional`、`TransactionTemplate`等)。如果事务是活动的,则在事务范围内执行的任何`KafkaTemplate`操作都使用事务的`Producer`。Manager 根据成功或失败提交或回滚事务。你必须配置`KafkaTemplate`以使用与事务管理器相同的`ProducerFactory`。 -##### [](#transaction-synchronization)事务同步 +##### 事务同步 本节引用仅生产者事务(不是由侦听器容器启动的事务);有关在容器启动事务时链接事务的信息,请参见[使用消费者发起的交易](#container-transaction-manager)。 @@ -3148,13 +3148,13 @@ public void process(List things) { | |从版本 2.5.17、2.6.12、2.7.9 和 2.8.0 开始,如果在同步事务上提交失败(在主事务提交之后),异常将被抛给调用者,
以前,这一点会被静默忽略(仅以 DEBUG 级别记录)。<br/>
应用程序应该采取补救措施,如果有必要,对已提交的主要事务进行补偿。| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#container-transaction-manager)使用消费者发起的事务 +##### 使用消费者发起的事务 从版本 2.7 开始,`ChainedKafkaTransactionManager`现在已被弃用;有关更多信息,请参见 Javadocs 的超类`ChainedTransactionManager`。相反,在容器中使用`KafkaTransactionManager`来启动 Kafka 事务,并用`@Transactional`注释侦听器方法来启动另一个事务。 有关链接 JDBC 和 Kafka 事务的示例应用程序,请参见[[ex-jdbc-sync]]。 -##### [](#kafkatemplate-local-transactions)`KafkaTemplate`本地事务 +##### `KafkaTemplate`本地事务 你可以使用`KafkaTemplate`在本地事务中执行一系列操作。下面的示例展示了如何做到这一点: @@ -3171,7 +3171,7 @@ boolean result = template.executeInTransaction(t -> { | |如果进程中有`KafkaTransactionManager`(或同步)事务,则不使用它。
而是使用新的“嵌套”事务。| |---|--------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#transaction-id-prefix)`transactionIdPrefix` +##### `transactionIdPrefix` 正如[概述](#transactions)中提到的,生产者工厂配置了此属性,以构建生产者`transactional.id`属性。在使用`EOSMode.ALPHA`运行应用程序的多个实例时,当在监听器容器线程上生成记录时,在所有实例上都必须相同,以满足 fencing zombies(在概述中也提到了)的要求。但是,当使用侦听器容器启动的**不是**事务生成记录时,每个实例的前缀必须不同。版本 2.3 使此配置更简单,尤其是在 Spring 启动应用程序中。在以前的版本中,你必须创建两个生产者工厂和`KafkaTemplate`S-一个用于在侦听器容器线程上生成记录,另一个用于由`kafkaTemplate.executeInTransaction()`或由`@Transactional`方法上的事务拦截器启动的独立事务。 @@ -3181,11 +3181,11 @@ boolean result = template.executeInTransaction(t -> { 当使用`EOSMode.BETA`(代理版本 \>=2.5)时,此问题(`transactional.id`的不同规则)已被消除;请参见[一次语义学](#exactly-once)。 -##### [](#tx-template-mixed)`KafkaTemplate`事务性和非事务性发布 +##### `KafkaTemplate`事务性和非事务性发布 通常,当`KafkaTemplate`是事务性的(配置了能够处理事务的生产者工厂)时,事务是必需的。事务可以通过`TransactionTemplate`、`@Transactional`方法启动,调用`executeInTransaction`,或者在配置`KafkaTransactionManager`时通过侦听器容器启动。在事务范围之外使用模板的任何尝试都会导致模板抛出`IllegalStateException`。从版本 2.4.3 开始,你可以将模板的`allowNonTransactional`属性设置为`true`。在这种情况下,通过调用`ProducerFactory`的`createNonTransactionalProducer()`方法,模板将允许操作在没有事务的情况下运行;生产者将被缓存或线程绑定,以进行正常的重用。参见[使用`DefaultKafkaProducerFactory`](#producer-factory)。 -##### [](#transactions-batch)具有批处理侦听器的事务 +##### 具有批处理侦听器的事务 当侦听器在使用事务时失败时,将调用`AfterRollbackProcessor`在回滚发生后采取一些操作。当在记录侦听器中使用默认的`AfterRollbackProcessor`时,将执行查找,以便重新交付失败的记录。但是,对于批处理侦听器,整个批处理将被重新交付,因为框架不知道批处理中的哪个记录失败了。有关更多信息,请参见[后回滚处理器](#after-rollback)。 @@ -3236,7 +3236,7 @@ public static class Config { } ``` -#### [](#exactly-once)4.1.13。一次语义学 +#### 4.1.13.一次语义学 你可以为侦听器容器提供一个`KafkaAwareTransactionManager`实例。当这样配置时,容器在调用侦听器之前启动一个事务。侦听器执行的任何`KafkaTemplate`操作都参与事务。如果侦听器在使用`BatchMessageListener`时成功地处理该记录(或多个记录),则容器在事务管理器提交事务之前通过使用`producer.sendOffsetsToTransaction()`向事务发送偏移量。如果侦听器抛出异常,事务将被回滚,使用者将被重新定位,以便在下一次投票时可以检索回滚记录。有关更多信息和处理多次失败的记录,请参见[后回滚处理器](#after-rollback)。 @@ -3278,7 +3278,7 @@ Spring 对于 Apache Kafka 版本 2.5 及更高版本,支持两种 EOS 模式 `V1`和`V2`以前是`ALPHA`和`BETA`;它们已被更改以使框架与[KIP-732](https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2)对齐。 -#### [](#interceptors)4.1.14。将 Spring bean 连接到生产者/消费者拦截器 +#### 4.1.14.将 Spring bean 连接到生产者/消费者拦截器 Apache Kafka 提供了一种向生产者和消费者添加拦截器的机制。这些对象是由 Kafka 管理的,而不是 Spring,因此正常的 Spring 依赖注入不适用于在依赖的 Spring bean 中连接。但是,你可以使用拦截器`config()`方法手动连接这些依赖项。下面的 Spring 引导应用程序展示了如何通过覆盖 Boot 的默认工厂将一些依赖的 Bean 添加到配置属性中来实现这一点。 @@ -3410,7 +3410,7 @@ consumer interceptor in my foo bean Received test ``` -#### [](#pause-resume)4.1.15。暂停和恢复监听器容器 +#### 4.1.15.暂停和恢复监听器容器 版本 2.1.3 为侦听器容器添加了`pause()`和`resume()`方法。以前,你可以在`ConsumerAwareMessageListener`中暂停一个消费者,并通过监听`ListenerContainerIdleEvent`来恢复它,该监听提供了对`Consumer`对象的访问。虽然可以通过使用事件侦听器在空闲容器中暂停使用者,但在某些情况下,这不是线程安全的,因为不能保证在使用者线程上调用事件侦听器。为了安全地暂停和恢复消费者,你应该在侦听器容器上使用`pause`和`resume`方法。a`pause()`在下一个`poll()`之前生效;a`resume()`在当前`poll()`返回之后生效。当容器暂停时,它将继续`poll()`使用者,从而避免在使用组管理时进行重新平衡,但它不会检索任何记录。有关更多信息,请参见 Kafka 文档。 @@ -3478,15 +3478,15 @@ ConsumerResumedEvent [partitions=[pause.resume.topic-1, pause.resume.topic-0]] thing2 ``` -#### [](#pause-resume-partitions)4.1.16。在侦听器容器上暂停和恢复分区 +#### 4.1.16.在侦听器容器上暂停和恢复分区 从版本 2.7 开始,你可以通过使用侦听器容器中的`pausePartition(TopicPartition topicPartition)`和`resumePartition(TopicPartition topicPartition)`方法暂停并恢复分配给该使用者的特定分区的使用。暂停和恢复分别发生在`poll()`之前和之后,类似于`pause()`和`resume()`方法。如果请求了该分区的暂停,`isPartitionPauseRequested()`方法将返回 true。如果该分区已有效地暂停,`isPartitionPaused()`方法将返回 true。 另外,由于版本 
2.7`ConsumerPartitionPausedEvent`和`ConsumerPartitionResumedEvent`实例与容器一起作为`source`属性和`TopicPartition`实例发布。 -#### [](#serdes)4.1.17。序列化、反序列化和消息转换 +#### 4.1.17.序列化、反序列化和消息转换 -##### [](#overview-3)概述 +##### 概述 Apache Kafka 提供了用于序列化和反序列化记录值及其键的高级 API。它存在于带有一些内置实现的`org.apache.kafka.common.serialization.Serializer`和`org.apache.kafka.common.serialization.Deserializer`抽象中。同时,我们可以通过使用`Producer`或`Consumer`配置属性来指定序列化器和反序列化器类。下面的示例展示了如何做到这一点: @@ -3502,7 +3502,7 @@ props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class); 当你使用这个 API 时,`DefaultKafkaProducerFactory`和`DefaultKafkaConsumerFactory`还提供属性(通过构造函数或 setter 方法)来将自定义`Serializer`和`Deserializer`实例注入到目标`Producer`或`Consumer`中。同样,你可以通过构造函数传入`Supplier`或`Supplier`实例-这些`Supplier`s 在创建每个`Producer`或`Consumer`时被调用。 -##### [](#string-serde)字符串序列化 +##### 字符串序列化 自版本 2.5 以来, Spring for Apache Kafka 提供了`ToStringSerializer`和`ParseStringDeserializer`使用实体的字符串表示的类。它们依赖于方法`toString`和一些`Function`或`BiFunction`来解析字符串并填充实例的属性。通常,这会调用类上的一些静态方法,例如`parse`: @@ -3542,7 +3542,7 @@ ParseStringDeserializer deserializer = new ParseStringDeserializer<>((st 还提供了用于 Kafka 流的`ToFromStringSerde`。 -##### [](#json-serde)JSON +##### JSON Spring 对于 Apache Kafka 还提供了基于 JacksonJSON 对象映射器的和实现。`JsonSerializer`允许将任何 Java 对象写为 JSON`byte[]`。`JsonDeserializer`需要一个额外的`Class targetType`参数,以允许将已使用的`byte[]`反序列化到正确的目标对象。下面的示例展示了如何创建`JsonDeserializer`: @@ -3558,7 +3558,7 @@ JsonDeserializer thingDeserializer = new JsonDeserializer<>(Thing.class); 从版本 2.1 开始,你可以在记录`Headers`中传递类型信息,从而允许处理多个类型。此外,你可以通过使用以下 Kafka 属性来配置序列化器和反序列化器。如果分别为`KafkaConsumer`和`KafkaProducer`提供了`Deserializer`实例,则它们没有任何作用。 -###### [](#serdes-json-config)配置属性 +###### 配置属性 * `JsonSerializer.ADD_TYPE_INFO_HEADERS`(默认`true`):你可以将其设置为`false`,以在`JsonSerializer`上禁用此功能(设置`addTypeInfo`属性)。 @@ -3587,7 +3587,7 @@ JsonDeserializer thingDeserializer = new JsonDeserializer<>(Thing.class); | |从版本 2.8 开始,如果你按照[纲领性建设](#prog-json)中所示的编程方式构造序列化器或反序列化器,那么上述属性将由工厂应用,只要你没有显式地设置任何属性(使用`set*()`方法或使用 Fluent API)。
以前,在以编程方式创建时,配置属性从未被应用;如果直接显式地在对象上设置属性,情况仍然是这样。| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -###### [](#serdes-mapping-types)映射类型 +###### 映射类型 从版本 2.2 开始,当使用 JSON 时,你现在可以通过使用前面列表中的属性来提供类型映射。以前,你必须在序列化器和反序列化器中自定义类型映射器。映射由`token:className`对的逗号分隔列表组成。在出站时,有效负载的类名被映射到相应的令牌。在入站时,类型头中的令牌将映射到相应的类名。 @@ -3621,7 +3621,7 @@ DefaultKafkaConsumerFactory cf = new DefaultKafkaConsumerFactory< new IntegerDeserializer(), new JsonDeserializer<>(Cat1.class, false)); ``` -###### [](#serdes-type-methods)使用方法确定类型 +###### 使用方法确定类型 从版本 2.5 开始,你现在可以通过属性配置反序列化器来调用一个方法来确定目标类型。如果存在,这将覆盖上面讨论的任何其他技术。如果数据是由不使用 Spring 序列化器的应用程序发布的,并且你需要根据数据或其他头来反序列化到不同类型,那么这可能是有用的。将这些属性设置为方法名-一个完全限定的类名,后面跟着方法名,中间隔一个句号`.`。方法必须声明为`public static`,具有三个签名之一`(String topic, byte[] data, Headers headers)`,`(byte[] data, Headers headers)`或`(byte[] data)`,并返回一个 Jackson`JavaType`。 @@ -3665,7 +3665,7 @@ public static JavaType thing1Thing2JavaTypeForTopic(String topic, byte[] data, H } ``` -###### [](#prog-json)纲领性建设 +###### 纲领性建设 从版本 2.3 开始,当以编程方式构建在生产者/消费者工厂中使用的序列化器/反序列化器时,你可以使用 Fluent API,这简化了配置。 @@ -3711,9 +3711,9 @@ JsonDeserializer deser = new JsonDeserializer<>() 或者,只要不使用 Fluent API 配置属性,或者不使用`set*()`方法设置属性,工厂将使用配置属性配置序列化器/反序列化器;参见[配置属性](#serdes-json-config)。 -##### [](#delegating-serialization)委托序列化器和反序列化器 +##### 委托序列化器和反序列化器 -###### [](#using-headers)使用头文件 +###### 使用头文件 版本 2.3 引入了`DelegatingSerializer`和`DelegatingDeserializer`,它们允许使用不同的键和/或值类型来生成和消费记录。制作者必须将标题`DelegatingSerializer.VALUE_SERIALIZATION_SELECTOR`设置为选择器值,用于选择要使用哪个序列化器作为该值,而`DelegatingSerializer.KEY_SERIALIZATION_SELECTOR`作为该键;如果找不到匹配项,则抛出`IllegalStateException`。 @@ -3742,7 +3742,7 @@ consumerProps.put(DelegatingDeserializer.VALUE_SERIALIZATION_SELECTOR_CONFIG, 有关将不同类型发送到不同主题的另一种技术,请参见[使用`RoutingKafkaTemplate`](#routing-template)。 -###### [](#by-type)按类型分列 +###### 按类型分列 2.8 版引入了`DelegatingByTypeSerializer`。 @@ -3759,7 +3759,7 @@ public ProducerFactory producerFactory(Map conf 从版本 2.8.3 开始,你可以将序列化器配置为检查是否可以从目标对象分配映射键,这在委托序列化器可以序列化子类时很有用。在这种情况下,如果有可亲的匹配,则应该提供一个有序的`Map`,例如一个`LinkedHashMap`。 -###### [](#by-topic)按主题 +###### 按主题 从版本 2.8 开始,`DelegatingByTopicSerializer`和`DelegatingByTopicDeserializer`允许基于主题名称选择序列化器/反序列化器。regex`Pattern`s 用于查找要使用的实例。可以使用构造函数或通过属性(用逗号分隔的列表`pattern:serializer`)来配置映射。 @@ -3791,7 +3791,7 @@ public ProducerFactory producerFactory(Map conf 当设置为`false`时,另一个属性`DelegatingByTopicSerialization.CASE_SENSITIVE`(默认`true`)会使主题查找不区分大小写。 -##### [](#retrying-deserialization)重试反序列化器 +##### 重试反序列化器 `RetryingDeserializer`使用委托`Deserializer`和`RetryTemplate`来重试反序列化,当委托在反序列化过程中可能出现瞬时错误时,例如网络问题。 @@ -3803,7 +3803,7 @@ ConsumerFactory cf = new DefaultKafkaConsumerFactory(myConsumerConfigs, 请参阅[spring-retry](https://github.com/spring-projects/spring-retry)项目,以配置带有重试策略、Back off 策略等的`RetryTemplate`项目。 -##### [](#messaging-message-conversion) Spring 消息传递消息转换 +##### Spring 消息传递消息转换 虽然`Serializer`和`Deserializer`API 从低级别的 Kafka`Consumer`和`Producer`透视图来看是非常简单和灵活的,但是在 Spring 消息传递级别,当使用`@KafkaListener`或[Spring Integration’s Apache Kafka 
Support](https://docs.spring.io/spring-integration/docs/current/reference/html/kafka.html#kafka)时,你可能需要更多的灵活性。为了让你能够轻松地转换`org.springframework.messaging.Message`, Spring for Apache Kafka 提供了一个`MessageConverter`的抽象,带有`MessagingMessageConverter`实现及其`JsonMessageConverter`(和子类)定制。你可以直接将`MessageConverter`注入`KafkaTemplate`实例中,并使用`AbstractKafkaListenerContainerFactory` Bean 对`@KafkaListener.containerFactory()`属性的定义。下面的示例展示了如何做到这一点: @@ -3855,7 +3855,7 @@ public void smart(Thing thing) { } ``` -###### [](#data-projection)使用 Spring 数据投影接口 +###### 使用 Spring 数据投影接口 从版本 2.1.1 开始,你可以将 JSON 转换为 Spring 数据投影接口,而不是具体的类型。这允许对数据进行非常有选择性的、低耦合的绑定,包括从 JSON 文档中的多个位置查找值。例如,以下接口可以定义为消息有效负载类型: @@ -3882,7 +3882,7 @@ public void projection(SomeSample in) { 当用作`@KafkaListener`方法的参数时,接口类型将作为正常类型自动传递给转换器。 -##### [](#error-handling-deserializer)使用`ErrorHandlingDeserializer` +##### 使用`ErrorHandlingDeserializer` 当反序列化器无法对消息进行反序列化时, Spring 无法处理该问题,因为它发生在`poll()`返回之前。为了解决这个问题,引入了`ErrorHandlingDeserializer`。这个反序列化器委托给一个真正的反序列化器(键或值)。如果委托未能反序列化记录内容,则`ErrorHandlingDeserializer`在包含原因和原始字节的头文件中返回一个`null`值和一个`DeserializationException`值。当你使用一个记录级别`MessageListener`时,如果`ConsumerRecord`包含一个用于键或值的`DeserializationException`头,则使用失败的`ErrorHandler`调用容器的`ConsumerRecord`。记录不会传递给监听器。 @@ -3999,7 +3999,7 @@ void listen(List> in) { } ``` -##### [](#payload-conversion-with-batch)与批处理侦听器的有效负载转换 +##### 与批处理侦听器的有效负载转换 在使用批监听器容器工厂时,还可以在`BatchMessagingMessageConverter`中使用`JsonMessageConverter`来转换批处理消息。有关更多信息,请参见[序列化、反序列化和消息转换](#serdes)和[Spring Messaging Message Conversion](#messaging-message-conversion)。 @@ -4042,7 +4042,7 @@ public void listen1(List> fooMessages) { } ``` -##### [](#conversionservice-customization)`ConversionService`定制 +##### `ConversionService`定制 从版本 2.1.1 开始,默认`org.springframework.core.convert.ConversionService`用于解析侦听器方法调用的参数所使用的`org.springframework.core.convert.ConversionService`与实现以下任何接口的所有 bean 一起提供: @@ -4057,7 +4057,7 @@ public void listen1(List> fooMessages) { | |通过`KafkaListenerConfigurer` Bean 在`KafkaListenerEndpointRegistrar`上设置自定义的`MessageHandlerMethodFactory`将禁用此功能。| |---|------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#custom-arg-resolve)将自定义`HandlerMethodArgumentResolver`添加到`@KafkaListener` +##### 将自定义`HandlerMethodArgumentResolver`添加到`@KafkaListener` 从版本 2.4.2 开始,你可以添加自己的`HandlerMethodArgumentResolver`并解析自定义方法参数。你所需要的只是实现`KafkaListenerConfigurer`并使用来自类`setCustomMethodArgumentResolvers()`的方法`setCustomMethodArgumentResolvers()`。 @@ -4092,7 +4092,7 @@ class CustomKafkaConfig implements KafkaListenerConfigurer { 另见[“墓碑”记录的空载和日志压缩](#tombstones)。 -#### [](#headers)4.1.18。消息头 +#### 4.1.18.消息头 0.11.0.0 客户机引入了对消息中的头的支持。从版本 2.0 开始, Spring for Apache Kafka 现在支持将这些头映射到`spring-messaging``MessageHeaders`。 @@ -4223,7 +4223,7 @@ MessagingMessageConverter converter() { 如果使用 Spring 引导,它将自动配置这个转换器 Bean 到自动配置的`KafkaTemplate`中;否则你应该将这个转换器添加到模板中。 -#### [](#tombstones)4.1.19。“墓碑”记录的空载和日志压缩 +#### 4.1.19.“墓碑”记录的空载和日志压缩 当你使用[对数压缩](https://kafka.apache.org/documentation/#compaction)时,你可以发送和接收带有`null`有效负载的消息,以识别删除的密钥。 @@ -4274,11 +4274,11 @@ static class MultiListenerBean { | |此功能需要使用`KafkaNullAwarePayloadArgumentResolver`,当使用默认的`MessageHandlerMethodFactory`时,框架将对其进行配置。
当使用自定义的`MessageHandlerMethodFactory`时,请参阅[将自定义`HandlerMethodArgumentResolver`添加到`@KafkaListener`]。| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#annotation-error-handling)4.1.20。处理异常 +#### 4.1.20.处理异常 本节描述了如何处理在使用 Spring 用于 Apache Kafka 时可能出现的各种异常。 -##### [](#listener-error-handlers)侦听器错误处理程序 +##### 侦听器错误处理程序 从版本 2.0 开始,`@KafkaListener`注释有一个新属性:`errorHandler`。 @@ -4361,7 +4361,7 @@ public ConsumerAwareListenerErrorHandler listen10ErrorHandler() { | |前面的两个示例是简单的实现,你可能希望在错误处理程序中进行更多的检查。| |---|--------------------------------------------------------------------------------------------------------------------------| -##### [](#error-handlers)容器错误处理程序 +##### 容器错误处理程序 从版本 2.8 开始,遗留的`ErrorHandler`和`BatchErrorHandler`接口已被一个新的`CommonErrorHandler`所取代。这些错误处理程序可以同时处理记录和批处理侦听器的错误,从而允许单个侦听器容器工厂为这两种类型的侦听器创建容器。`CommonErrorHandler`替换大多数遗留框架错误处理程序的实现被提供,并且不推荐遗留错误处理程序。遗留接口仍然受到侦听器容器和侦听器容器工厂的支持;它们将在未来的版本中被弃用。 @@ -4402,7 +4402,7 @@ public KafkaListenerContainerFactory kafkaListenerCont 对于记录侦听器,这将重试一次交付多达 2 次(3 次交付尝试),并后退 1 秒,而不是默认配置(`FixedBackOff(0L, 9)`)。在重试结束后,只需记录失败的次数。 -例如;如果`poll`返回六条记录(每个分区 0、1、2 有两条记录),并且侦听器在第四条记录上抛出异常,则容器通过提交它们的偏移量来确认前三条消息。`DefaultErrorHandler`寻求分区 1 的偏移量 1 和分区 2 的偏移量 0。下一个`poll()`返回这三条未处理的记录。 +例如;如果`poll`返回六条记录(每个分区 0、1、2 有两条记录),并且侦听器在第四条记录上抛出异常,则容器通过提交它们的偏移量来确认前三条消息。`DefaultErrorHandler`寻求分区 1 的偏移量 1 和分区 2 的偏移量 0.下一个`poll()`返回这三条未处理的记录。 如果`AckMode`是`BATCH`,则容器在调用错误处理程序之前提交前两个分区的偏移量。 @@ -4536,7 +4536,7 @@ handler.setBackOffFunction((record, ex) -> { ... }); 另见[传递尝试标头](#delivery-header)。 -#### [](#batch-listener-conv-errors)4.1.21。使用批处理错误处理程序的转换错误 +#### 4.1.21.使用批处理错误处理程序的转换错误 从版本 2.8 开始,批处理侦听器现在可以正确处理转换错误,当使用`MessageConverter`和`ByteArrayDeserializer`、`BytesDeserializer`或`StringDeserializer`以及`DefaultErrorHandler`时。当发生转换错误时,将有效负载设置为 null,并将反序列化异常添加到记录头中,类似于`ErrorHandlingDeserializer`。侦听器中有一个`ConversionException`s 的列表可用,因此侦听器可以抛出一个`BatchListenerFailedException`,指示发生转换异常的第一个索引。 @@ -4555,7 +4555,7 @@ void listen(List in, @Header(KafkaHeaders.CONVERSION_FAILURES) List in, @Header(KafkaHeaders.CONVERSION_FAILURES) List in, @Header(KafkaHeaders.CONVERSION_FAILURES) List in, @Header(KafkaHeaders.CONVERSION_FAILURES) List`,用于发送记录。你还可以选择用`BiFunction, Exception, TopicPartition>`配置它,调用它是为了解析目标主题和分区。 @@ -4836,7 +4836,7 @@ public ErrorHandler eh(KafkaOperations template) { 从版本 2.7 开始,recoverer 将检查目标解析程序选择的分区是否确实存在。如果不存在分区,则将`ProducerRecord`中的分区设置为`null`,从而允许`KafkaProducer`选择该分区。可以通过将`verifyPartition`属性设置为`false`来禁用此检查。 -##### [](#dlpr-headers)管理死信记录头 +##### 管理死信记录头 参考上面的[发布死信记录](#dead-letters),`DeadLetterPublishingRecoverer`有两个属性,当这些头已经存在时(例如,当重新处理失败的死信记录时,包括使用[非阻塞重试](#retry-topic)时),这些属性用于管理头。 @@ -4854,7 +4854,7 @@ Apache Kafka 支持同名的多个头;要获得“latest”值,可以使用` 另见[故障报头管理](#retry-headers)与[非阻塞重试](#retry-topic)。 -##### [](#exp-backoff)`ExponentialBackOffWithMaxRetries`实现 +##### `ExponentialBackOffWithMaxRetries`实现 Spring 框架提供了许多`BackOff`实现方式。默认情况下,`ExponentialBackOff`将无限期地重试;如果要在多次重试后放弃,则需要计算`maxElapsedTime`。由于版本 2.7.3, Spring for Apache Kafka 提供了`ExponentialBackOffWithMaxRetries`,这是一个子类,它接收`maxRetries`属性并自动计算`maxElapsedTime`,这更方便一些。 @@ -4871,7 +4871,7 @@ DefaultErrorHandler handler() { 这将在`1, 2, 4, 8, 10, 10`秒后重试,然后再调用 recoverer。 -#### [](#kerberos)4.1.22。Jaas 和 Kerberos +#### 4.1.22.Jaas 和 Kerberos 从版本 2.0 
开始,添加了一个`KafkaJaasLoginModuleInitializer`类来帮助 Kerberos 配置。你可以使用所需的配置将这个 Bean 添加到你的应用程序上下文中。下面的示例配置了这样的 Bean: @@ -4890,11 +4890,11 @@ public KafkaJaasLoginModuleInitializer jaasConfig() throws IOException { } ``` -### [](#streams-kafka-streams)4.2。 Apache Kafka Streams 支持 +### 4.2. Apache Kafka Streams 支持 -从版本 1.1.4 开始, Spring for Apache Kafka 为[卡夫卡溪流](https://kafka.apache.org/documentation/streams)提供了一流的支持。要在 Spring 应用程序中使用它,`kafka-streams`jar 必须存在于 Classpath 上。它是 Spring for Apache Kafka 项目的可选依赖项,并且不是通过传递方式下载的。 +从版本 1.1.4 开始, Spring for Apache Kafka 为[Kafka溪流](https://kafka.apache.org/documentation/streams)提供了一流的支持。要在 Spring 应用程序中使用它,`kafka-streams`jar 必须存在于 Classpath 上。它是 Spring for Apache Kafka 项目的可选依赖项,并且不是通过传递方式下载的。 -#### [](#basics)4.2.1。基础知识 +#### 4.2.1.基础知识 参考文献 Apache Kafka Streams 文档建议使用以下 API 的方式: @@ -4928,7 +4928,7 @@ streams.close(); | |由单个`StreamsBuilder`实例暴露给`KStream`实例的所有`KafkaStreams`实例同时启动和停止,即使它们具有不同的逻辑。,换句话说,
由`StreamsBuilder`定义的所有流都绑定到单个生命周期控制。<br/>
一旦`KafkaStreams`实例被`streams.close()`关闭,就无法重新启动。
相反,必须创建一个新的`KafkaStreams`实例来重新启动流处理。| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#streams-spring)4.2.2。 Spring 管理 +#### 4.2.2. Spring 管理 为了简化从 Spring 应用程序上下文视角使用 Kafka 流并通过容器使用生命周期管理, Spring for Apache Kafka 引入了`StreamsBuilderFactoryBean`。这是一个`AbstractFactoryBean`实现,用于将`StreamsBuilder`单例实例公开为 Bean。下面的示例创建了这样的 Bean: @@ -4996,7 +4996,7 @@ public interface KafkaStreamsInfrastructureCustomizer { 提供了一个`CompositeKafkaStreamsInfrastructureCustomizer`,用于在需要应用多个自定义程序时。 -#### [](#streams-micrometer)4.2.3。Kafkastreams 测微仪支持 +#### 4.2.3.Kafkastreams 测微仪支持 在版本 2.5.3 中引入的,可以配置`KafkaStreamsMicrometerListener`来为工厂 Bean 管理的`KafkaStreams`对象自动注册千分表: @@ -5005,7 +5005,7 @@ streamsBuilderFactoryBean.addListener(new KafkaStreamsMicrometerListener(meterRe Collections.singletonList(new ImmutableTag("customTag", "customTagValue")))); ``` -#### [](#serde)4.2.4。流 JSON 序列化和反序列化 +#### 4.2.4.流 JSON 序列化和反序列化 对于在以 JSON 格式读取或写入主题或状态存储时序列化和反序列化数据, Spring for Apache Kafka 提供了一个`JsonSerde`实现,该实现使用 JSON,将其委托给`JsonSerializer`和`JsonDeserializer`中描述的[序列化、反序列化和消息转换](#serdes)。`JsonSerde`实现通过其构造函数(目标类型或`ObjectMapper`)提供相同的配置选项。在下面的示例中,我们使用`JsonSerde`序列化和反序列化 Kafka 流的`Cat`有效负载(只要需要实例,`JsonSerde`就可以以类似的方式使用): @@ -5024,7 +5024,7 @@ stream.through(new JsonSerde<>(MyKeyType.class) "myTypes"); ``` -#### [](#using-kafkastreambrancher)4.2.5。使用`KafkaStreamBrancher` +#### 4.2.5.使用`KafkaStreamBrancher` `KafkaStreamBrancher`类引入了一种在`KStream`之上构建条件分支的更方便的方法。 @@ -5053,7 +5053,7 @@ new KafkaStreamBrancher() //onTopOf method returns the provided stream so we can continue with method chaining ``` -#### [](#streams-config)4.2.6。配置 +#### 4.2.6.配置 要配置 Kafka Streams 环境,`StreamsBuilderFactoryBean`需要一个`KafkaStreamsConfiguration`实例。有关所有可能的选项,请参见 Apache kafka[文件](https://kafka.apache.org/0102/documentation/#streamsconfigs)。 @@ -5064,7 +5064,7 @@ new KafkaStreamBrancher() 默认情况下,当工厂 Bean 停止时,将调用`KafkaStreams.cleanUp()`方法。从版本 2.1.2 开始,工厂 Bean 有额外的构造函数,接受一个`CleanupConfig`对象,该对象具有属性,可以让你控制在`cleanUp()`或`stop()`期间是否调用`cleanUp()`方法。从版本 2.7 开始,默认情况是永远不清理本地状态。 -#### [](#streams-header-enricher)4.2.7。页眉 Enricher +#### 4.2.7.页眉 Enricher 版本 2.3 增加了`HeaderEnricher`的`Transformer`实现。这可用于在流处理中添加头;头的值是 SPEL 表达式;表达式求值的根对象具有 3 个属性: @@ -5105,7 +5105,7 @@ stream .to(OUTPUT); ``` -#### [](#streams-messaging)4.2.8。`MessagingTransformer` +#### 4.2.8.`MessagingTransformer` 版本 2.3 增加了`MessagingTransformer`,这允许 Kafka Streams 拓扑与 Spring 消息传递组件进行交互,例如 Spring 集成流。转换器要求实现`MessagingFunction`。 @@ -5120,7 +5120,7 @@ public interface MessagingFunction { Spring 集成自动提供了一种使用其`GatewayProxyFactoryBean`的实现方式。它还需要一个`MessagingMessageConverter`来将键、值和元数据(包括头)转换为/来自 Spring 消息传递`Message`。参见[[从`KStream`调用 Spring 集成流](https://DOCS. 
Spring.io/ Spring-integration/DOCS/current/reference/html/kafka.html#Streams-integration)]以获得更多信息。 -#### [](#streams-deser-recovery)4.2.9。从反序列化异常恢复 +#### 4.2.9.从反序列化异常恢复 版本 2.3 引入了`RecoveringDeserializationExceptionHandler`,它可以在发生反序列化异常时采取一些操作。请参考关于`DeserializationExceptionHandler`的 Kafka 文档,其中`RecoveringDeserializationExceptionHandler`是一个实现。`RecoveringDeserializationExceptionHandler`配置为`ConsumerRecordRecoverer`实现。该框架提供了`DeadLetterPublishingRecoverer`,它将失败的记录发送到死信主题。有关此回收器的更多信息,请参见[发布死信记录](#dead-letters)。 @@ -5147,7 +5147,7 @@ public DeadLetterPublishingRecoverer recoverer() { 当然,`recoverer()` Bean 可以是你自己的`ConsumerRecordRecoverer`的实现。 -#### [](#kafka-streams-example)4.2.10。Kafka Streams 示例 +#### 4.2.10.Kafka Streams 示例 下面的示例结合了我们在本章中讨论的所有主题: @@ -5197,15 +5197,15 @@ public static class KafkaStreamsConfig { } ``` -### [](#testing)4.3。测试应用程序 +### 4.3.测试应用程序 `spring-kafka-test`JAR 包含一些有用的实用程序,以帮助测试你的应用程序。 -#### [](#ktu)4.3.1。Kafkatestutils +#### 4.3.1.Kafkatestutils `o.s.kafka.test.utils.KafkaTestUtils`提供了许多静态助手方法来使用记录、检索各种记录偏移量以及其他方法。有关完整的详细信息,请参阅其[Javadocs](https://docs.spring.io/spring-kafka/docs/current/api/org/springframework/kafka/test/utils/KafkaTestUtils.html)。 -#### [](#junit)4.3.2。朱尼特 +#### 4.3.2.朱尼特 `o.s.kafka.test.utils.KafkaTestUtils`还提供了一些静态方法来设置生产者和消费者属性。下面的清单显示了这些方法签名: @@ -5294,9 +5294,9 @@ ConsumerRecord received = KafkaTestUtils.getSingleRecord(consum 当`EmbeddedKafkaBroker`启动嵌入式 Kafka 和嵌入式 ZooKeeper 服务器时,将名为`spring.embedded.kafka.brokers`的系统属性设置为 Kafka 代理的地址,并将名为`spring.embedded.zookeeper.connect`的系统属性设置为 ZooKeeper 的地址。为此属性提供了方便的常量(`EmbeddedKafkaBroker.SPRING_EMBEDDED_KAFKA_BROKERS`和`EmbeddedKafkaBroker.SPRING_EMBEDDED_ZOOKEEPER_CONNECT`)。 -使用`EmbeddedKafkaBroker.brokerProperties(Map)`,你可以为 Kafka 服务器提供其他属性。有关可能的代理属性的更多信息,请参见[卡夫卡配置](https://kafka.apache.org/documentation/#brokerconfigs)。 +使用`EmbeddedKafkaBroker.brokerProperties(Map)`,你可以为 Kafka 服务器提供其他属性。有关可能的代理属性的更多信息,请参见[Kafka配置](https://kafka.apache.org/documentation/#brokerconfigs)。 -#### [](#configuring-topics-2)4.3.3。配置主题 +#### 4.3.3.配置主题 下面的示例配置创建了带有五个分区的`cat`和`hat`主题,带有 10 个分区的`thing1`主题,以及带有 15 个分区的`thing2`主题: @@ -5318,7 +5318,7 @@ public class MyTests { 默认情况下,`addTopics`在出现问题(例如添加已经存在的主题)时将抛出异常。版本 2.6 添加了该方法的新版本,该版本返回`Map`;关键是主题名称,对于成功,值是`null`,对于失败,值是`Exception`。 -#### [](#using-the-same-brokers-for-multiple-test-classes)4.3.4。对多个测试类使用相同的代理 +#### 4.3.4.对多个测试类使用相同的代理 这样做并没有内置的支持,但是你可以使用相同的代理对多个测试类进行类似于以下的操作: @@ -5367,7 +5367,7 @@ private static final EmbeddedKafkaBroker broker = EmbeddedKafkaHolder.getEmbedde | |前面的示例没有提供在所有测试完成后关闭代理的机制,
which can be a problem if, say, you run your tests in a Gradle daemon;
在这种情况下,你不应该使用这种技术,或者,当测试完成时,你应该在`EmbeddedKafkaBroker`上使用调用`destroy()`的方法。| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#embedded-kafka-annotation)4.3.5。@Embeddedkafka 注释 +#### 4.3.5.@Embeddedkafka 注释 我们通常建议你使用`@ClassRule`规则,以避免在测试之间启动和停止代理(并为每个测试使用不同的主题)。从版本 2.0 开始,如果使用 Spring 的测试应用程序上下文缓存,还可以声明`EmbeddedKafkaBroker` Bean,因此单个代理可以跨多个测试类使用。为了方便起见,我们提供了一个名为`@EmbeddedKafka`的测试类级注释来注册`EmbeddedKafkaBroker` Bean。下面的示例展示了如何使用它: @@ -5431,7 +5431,7 @@ public class KafkaStreamsTests { 你可以在 JUnit4 或 JUnit5 中使用`@EmbeddedKafka`注释。 -#### [](#embedded-kafka-junit5)4.3.6。@EmbeddedKafka 注释与 JUnit5 +#### 4.3.6.@EmbeddedKafka 注释与 JUnit5 从版本 2.3 开始,有两种方法可以使用 JUnit5 的`@EmbeddedKafka`注释。当与`@SpringJunitConfig`注释一起使用时,嵌入式代理将添加到测试应用程序上下文中。你可以在类或方法级别将代理自动连接到你的测试中,以获得代理地址列表。 @@ -5455,7 +5455,7 @@ public class EmbeddedKafkaConditionTests { | |当有 Spring 可用的测试应用程序上下文时,topics 和 broker 属性可以包含属性占位符,只要在某个地方定义了属性,这些占位符就会被解析。
如果没有 Spring 可用的上下文,这些占位符就不会被解析。| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#embedded-broker-in-springboottest-annotations)4.3.7。`@SpringBootTest`注释中的嵌入式代理 +#### 4.3.7.`@SpringBootTest`注释中的嵌入式代理 [Spring Initializr](https://start.spring.io/)现在自动将测试范围中的`spring-kafka-test`依赖项添加到项目配置中。 @@ -5470,7 +5470,7 @@ public class EmbeddedKafkaConditionTests { * [`@EmbeddedKafka`注释或`EmbeddedKafkaBroker` Bean(#kafka-testing-embeddedkafka-annotation) -##### [](#kafka-testing-junit4-class-rule)JUnit4 类规则 +##### JUnit4 类规则 下面的示例展示了如何使用 JUnit4 类规则来创建嵌入式代理: @@ -5498,7 +5498,7 @@ public class MyApplicationTests { 注意,由于这是一个 Spring 引导应用程序,因此我们将覆盖代理列表属性以设置引导属性。 -##### [](#kafka-testing-embeddedkafka-annotation)`@EmbeddedKafka`注释或`EmbeddedKafkaBroker` Bean +##### `@EmbeddedKafka`注释或`EmbeddedKafkaBroker` Bean 下面的示例展示了如何使用`@EmbeddedKafka`注释来创建嵌入式代理: @@ -5519,7 +5519,7 @@ public class MyApplicationTests { } ``` -#### [](#hamcrest-matchers)4.3.8。汉克雷斯特火柴人 +#### 4.3.8.汉克雷斯特火柴人 `o.s.kafka.test.hamcrest.KafkaMatchers`提供了以下匹配器: @@ -5566,7 +5566,7 @@ public static Matcher> hasTimestamp(TimestampType type, lon } ``` -#### [](#assertj-conditions)4.3.9。AssertJ 条件 +#### 4.3.9.AssertJ 条件 你可以使用以下 AssertJ 条件: @@ -5619,7 +5619,7 @@ public static Condition> timestamp(TimestampType type, long } ``` -#### [](#example)4.3.10。例子 +#### 4.3.10.例子 下面的示例汇总了本章涵盖的大多数主题: @@ -5693,20 +5693,20 @@ received = records.poll(10, TimeUnit.SECONDS); assertThat(received).has(allOf(keyValue(2, "baz"), partition(0))); ``` -### [](#retry-topic)4.4。非阻塞重试 +### 4.4.非阻塞重试 | |这是一个实验性的功能,通常的不中断 API 更改的规则不适用于此功能,直到删除了实验性的指定。
Users are encouraged to try out the feature and provide feedback through GitHub issues or GitHub discussions.
这仅与 API 有关;该功能被认为是完整且健壮的。| |---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| 使用 Kafka 实现非阻塞重试/DLT 功能通常需要设置额外的主题并创建和配置相应的侦听器。由于 2.7 Spring for Apache,Kafka 通过`@RetryableTopic`注释和`RetryTopicConfiguration`类提供了对此的支持,以简化该引导。 -#### [](#how-the-pattern-works)4.4.1。模式的工作原理 +#### 4.4.1.模式的工作原理 如果消息处理失败,该消息将被转发到带有后退时间戳的重试主题。然后,重试主题使用者检查时间戳,如果没有到期,它会暂停该主题分区的消耗。当它到期时,将恢复分区消耗,并再次消耗消息。如果消息处理再次失败,则消息将被转发到下一个重试主题,并重复该模式,直到处理成功,或者尝试已尽,并将消息发送到死信主题(如果已配置)。 为了说明这一点,如果你有一个“main-topic”主题,并且希望设置非阻塞重试,该重试的指数回退为 1000ms,乘数为 2 和 4max,那么它将创建 main-topic-retry-1000、main-topic-retry-2000、main-topic-retry-4000 和 main-topic-dlt 主题,并配置相应的消费者。该框架还负责创建主题以及设置和配置侦听器。 -| |通过使用这种策略,你将失去卡夫卡对该主题的排序保证。| +| |通过使用这种策略,你将失去Kafka对该主题的排序保证。| |---|---------------------------------------------------------------------------| | |你可以设置你喜欢的`AckMode`模式,但建议使用`RECORD`模式。| @@ -5715,9 +5715,9 @@ assertThat(received).has(allOf(keyValue(2, "baz"), partition(0))); | |目前,此功能不支持类级别`@KafkaListener`注释| |---|----------------------------------------------------------------------------------------| -#### [](#back-off-delay-precision)4.4.2。退后延迟精度 +#### 4.4.2.退后延迟精度 -##### [](#overview-and-guarantees)概述和保证 +##### 概述和保证 所有的消息处理和退线都由使用者线程处理,因此,在尽力而为的基础上保证了延迟精度。如果一条消息的处理时间超过了下一条消息对该消费者的回退期,则下一条消息的延迟将高于预期。此外,对于较短的延迟(大约 1s 或更短),线程必须进行的维护工作(例如提交偏移)可能会延迟消息处理的执行。如果重试主题的使用者正在处理多个分区,则精度也会受到影响,因为我们依赖于从轮询中唤醒使用者并具有完整的 polltimeouts 来进行时间调整。 @@ -5726,16 +5726,16 @@ assertThat(received).has(allOf(keyValue(2, "baz"), partition(0))); | |保证一条消息在到期前永远不会被处理。| |---|----------------------------------------------------------------------------| -##### [](#tuning-the-delay-precision)调整延迟精度 +##### 调整延迟精度 消息的处理延迟精度依赖于两个`ContainerProperties`:`ContainerProperties.pollTimeout`和`ContainerProperties.idlePartitionEventInterval`。这两个属性将在重试主题和 DLT 的`ListenerContainerFactory`中自动设置为该主题最小延迟值的四分之一,最小值为 250ms,最大值为 5000ms。只有当属性有其默认值时,才会设置这些值-如果你自己更改其中一个值,你的更改将不会被重写。通过这种方式,你可以根据需要调整重试主题的精度和性能。 | |你可以为 main 和 retry 主题设置单独的`ListenerContainerFactory`实例-这样你就可以设置不同的设置,以更好地满足你的需求,例如,为 main 主题设置更高的轮询超时设置,为 retry 主题设置更低的轮询超时设置。| |---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -#### [](#configuration)4.4.3。配置 +#### 4.4.3.配置 -##### [](#using-the-retryabletopic-annotation)使用`@RetryableTopic`注释 +##### 使用`@RetryableTopic`注释 要为`@KafkaListener`注释方法配置重试主题和 DLT,只需向其添加`@RetryableTopic`注释,而 Spring 对于 Apache Kafka 将使用默认配置引导所有必要的主题和使用者。 @@ -5759,7 +5759,7 @@ public void processMessage(MyPojo message) { | |如果你没有指定 Kafkatemplate 名称,则将查找名称为`retryTopicDefaultKafkaTemplate`的 Bean。
如果没有找到 Bean,则抛出异常。| |---|--------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#using-retrytopicconfiguration-beans)使用`RetryTopicConfiguration`bean +##### 使用`RetryTopicConfiguration`bean 你还可以通过在`@Configuration`带注释的类中创建`RetryTopicConfiguration`bean 来配置非阻塞重试支持。 @@ -5819,11 +5819,11 @@ public KafkaTemplate kafkaTemplate() { } ``` -#### [](#features)4.4.4。特征 +#### 4.4.4.特征 大多数特性都适用于`@RetryableTopic`注释和`RetryTopicConfiguration`bean。 -##### [](#backoff-configuration)退避配置 +##### 退避配置 退避配置依赖于`Spring Retry`项目中的`BackOffPolicy`接口。 @@ -5880,7 +5880,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templa | |第一次尝试与 maxtripts 相对应,因此,如果你提供的 maxtripes 值为 4,那么将出现原始尝试加 3 次重试。| |---|---------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#single-topic-fixed-delay-retries)单话题固定延迟重试 +##### 单话题固定延迟重试 如果你使用固定的延迟策略,例如`FixedBackOffPolicy`或`NoBackOffPolicy`,你可以使用一个主题来完成非阻塞重试。此主题将使用提供的或默认的后缀作为后缀,并且不会附加索引或延迟值。 @@ -5907,7 +5907,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templa | |默认的行为是为每次尝试创建单独的重试主题,并附上它们的索引值:retry-0、retry-1、…| |---|------------------------------------------------------------------------------------------------------------------------------| -##### [](#global-timeout)全局超时 +##### 全局超时 你可以为重试过程设置全局超时。如果达到了这个时间,则下一次使用者抛出异常时,消息将直接传递到 DLT,或者如果没有可用的 DLT,消息将结束处理。 @@ -5933,7 +5933,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templa | |默认值是没有超时设置的,这也可以通过提供-1 作为超时值来实现。| |---|-----------------------------------------------------------------------------------------------------| -##### [](#retry-topic-ex-classifier)异常分类器 +##### 异常分类器 你可以指定要重试的异常和不要重试的异常。你还可以将其设置为遍历原因以查找嵌套的异常。 @@ -5975,7 +5975,7 @@ public DefaultDestinationTopicResolver topicResolver(ApplicationContext applicat | |要禁用致命异常的分类,请使用`setClassifications`中的`DefaultDestinationTopicResolver`方法清除默认列表。| |---|-----------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#include-and-exclude-topics)包含和排除主题 +##### 包含和排除主题 你可以通过.includeTopic(字符串主题)、.includeTopics(集合 \主题)、.excludeTopic(字符串主题)和.excludeTopics(集合 \主题)方法来决定哪些主题将由 Bean 处理。 @@ -6000,7 +6000,7 @@ public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate | |默认的行为是包含所有主题。| |---|----------------------------------------------| -##### [](#topics-autocreation)topics 自动创建 +##### topics 自动创建 除非另有说明,否则框架将使用`NewTopic`bean 自动创建所需的主题,这些 bean 由`KafkaAdmin` Bean 使用。你可以指定创建主题所使用的分区数量和复制因子,并且可以关闭此功能。 @@ -6042,7 +6042,7 @@ public RetryTopicConfiguration myOtherRetryTopic(KafkaTemplate | |默认情况下,主题是用一个分区和一个复制因子自动创建的。| |---|-----------------------------------------------------------------------------------------| -##### [](#retry-headers)故障报头管理 +##### 故障报头管理 在考虑如何管理故障报头(原始报头和异常报头)时,框架将委托给`DeadLetterPublishingRecover`,以决定是否追加或替换报头。 @@ -6066,7 +6066,7 @@ DeadLetterPublishingRecovererFactory factory(DestinationTopicResolver resolver) } ``` -#### [](#topic-naming)4.4.5。主题命名 +#### 4.4.5.主题命名 Retry Topics 和 DLT 的命名方法是使用提供的或默认值对主主题进行后缀,并附加该主题的延迟或索引。 @@ -6076,7 +6076,7 @@ Retry Topics 和 DLT 的命名方法是使用提供的或默认值对主主题 “my-other-topic”“my-topic-myretrySuffix-1000”,“my-topic-myretrySuffix-2000”,…,“my-topic-mydltSufix”。 -##### [](#retry-topics-and-dlt-suffixes)重试主题和 DLT 后缀 +##### 重试主题和 DLT 后缀 你可以指定 Retry 和 DLT 主题将使用的后缀。 @@ -6102,7 +6102,7 @@ public RetryTopicConfiguration 
myRetryTopic(KafkaTemplate t | |默认后缀是“-retry”和“-dlt”,分别用于重试主题和 DLT。| |---|------------------------------------------------------------------------------------| -##### [](#appending-the-topics-index-or-delay)附加主题索引或延迟 +##### 附加主题索引或延迟 你可以在后缀之后追加主题的索引值,也可以在后缀之后追加延迟值。 @@ -6127,7 +6127,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templa | |默认的行为是使用延迟值作为后缀,除了具有多个主题的固定延迟配置,在这种情况下,主题以主题的索引作为后缀。| |---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -##### [](#custom-naming-strategies)自定义命名策略 +##### 自定义命名策略 可以通过注册实现`RetryTopicNamesProviderFactory`的 Bean 来实现更复杂的命名策略。默认实现是`SuffixingRetryTopicNamesProviderFactory`,可以通过以下方式注册不同的实现: @@ -6165,11 +6165,11 @@ public class CustomRetryTopicNamesProviderFactory implements RetryTopicNamesProv } ``` -#### [](#dlt-strategies)4.4.6。DLT 策略 +#### 4.4.6.DLT 策略 该框架为使用 DLTS 提供了一些策略。你可以提供用于 DLT 处理的方法,也可以使用默认的日志记录方法,或者根本没有 DLT。你还可以选择如果 DLT 处理失败会发生什么。 -##### [](#dlt-processing-method)DLT 处理方法 +##### DLT 处理方法 你可以指定用于处理该主题的 DLT 的方法,以及在处理失败时的行为。 @@ -6223,7 +6223,7 @@ public class MyCustomDltProcessor { 稍后可以通过`KafkaListenerEndpointRegistry`启动 DLT 处理程序。 -##### [](#dlt-failure-behavior)DLT 故障行为 +##### DLT 故障行为 如果 DLT 处理失败,有两种可能的行为可用:`ALWAYS_RETRY_ON_ERROR`和`FAIL_ON_ERROR`。 @@ -6273,7 +6273,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templ 有关更多信息,请参见[异常分类器](#retry-topic-ex-classifier)。 -##### [](#configuring-no-dlt)配置无 dlt +##### 配置无 dlt 该框架还提供了不为主题配置 DLT 的可能性。在这种情况下,在重审用尽之后,程序就结束了。 @@ -6296,7 +6296,7 @@ public RetryTopicConfiguration myRetryTopic(KafkaTemplate templ } ``` -#### [](#retry-topic-lcf)4.4.7。指定 ListenerContainerFactory +#### 4.4.7.指定 ListenerContainerFactory 默认情况下,RetryTopic 配置将使用`@KafkaListener`注释中提供的工厂,但是你可以指定一个不同的工厂来创建 RetryTopic 和 DLT 侦听器容器。 @@ -6552,7 +6552,7 @@ public class CustomJsonSerializer extends JsonSerializer { 当在 Spring 引导应用程序中对 Apache Kafka 使用 Spring 时, Apache Kafka 依赖关系版本由 Spring 引导的依赖关系管理确定。如果希望使用不同版本的`kafka-clients`或`kafka-streams`,并使用嵌入式 Kafka 代理进行测试,则需要覆盖 Spring 引导依赖项管理使用的版本,并为 Apache Kafka 添加两个`test`工件。 -| |在 Microsoft Windows 上运行嵌入式代理时, Apache Kafka3.0.0 中存在一个 bug[卡夫卡-13391](https://issues.apache.org/jira/browse/KAFKA-13391)。
To use the embedded broker on Windows, you need to downgrade the Apache Kafka version to 2.8.1 until 3.0.1 is available.
使用 2.8.1 时,你还需要从`spring-kafka-test`中排除`zookeeper`依赖关系。| +| |在 Microsoft Windows 上运行嵌入式代理时, Apache Kafka3.0.0 中存在一个 bug[Kafka-13391](https://issues.apache.org/jira/browse/KAFKA-13391)。
To use the embedded broker on Windows, you need to downgrade the Apache Kafka version to 2.8.1 until 3.0.1 is available.
使用 2.8.1 时,你还需要从`spring-kafka-test`中排除`zookeeper`依赖关系。| |---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Maven @@ -6851,7 +6851,7 @@ a`RoutingKafkaTemplate`现已提供。有关更多信息,请参见[使用`Rout 现在可以将批处理侦听器配置为`BatchToRecordAdapter`;例如,这允许在一个事务中处理批处理,而侦听器一次获取一条记录。对于默认实现,可以使用`ConsumerRecordRecoverer`来处理批处理中的错误,而不会停止整个批处理的处理-这在使用事务时可能很有用。有关更多信息,请参见[具有批处理侦听器的事务](#transactions-batch)。 -\==== 卡夫卡流 +\==== Kafka流 `StreamsBuilderFactoryBean`接受一个新属性`KafkaStreamsInfrastructureCustomizer`。这允许在创建流之前配置构建器和/或拓扑。有关更多信息,请参见[Spring Management](#streams-spring)。 @@ -7021,7 +7021,7 @@ a`RoutingKafkaTemplate`现已提供。有关更多信息,请参见[使用`Rout 此外,`DefaultKafkaHeaderMapper`有一个新的`addToStringClasses`方法,允许使用`toString()`而不是 JSON 来映射类型的规范。有关更多信息,请参见[消息头](#headers)。 -\==== 嵌入式卡夫卡更改 +\==== 嵌入式Kafka更改 `KafkaEmbedded`类及其`KafkaRule`接口已被弃用,而支持`EmbeddedKafkaBroker`及其 JUnit4`EmbeddedKafkaRule`包装器。现在,`@EmbeddedKafka`注释填充`EmbeddedKafkaBroker` Bean,而不是不受欢迎的`KafkaEmbedded`。此更改允许在 JUnit5 测试中使用`@EmbeddedKafka`。`@EmbeddedKafka`注释现在具有属性`ports`来指定填充`EmbeddedKafkaBroker`的端口。有关更多信息,请参见[测试应用程序](#testing)。 @@ -7103,7 +7103,7 @@ a`RoutingKafkaTemplate`现已提供。有关更多信息,请参见[使用`Rout \==== Spring 框架和 Java 版本 -Spring for Apache Kafka 项目现在需要 Spring Framework5.0 和 Java8。 +Spring for Apache Kafka 项目现在需要 Spring Framework5.0 和 Java8. \====`@KafkaListener`变化 @@ -7155,7 +7155,7 @@ Spring for Apache Kafka 项目现在需要 Spring Framework5.0 和 Java8。 \===1.0 到 1.1 之间的变化 -\==== 卡夫卡客户端 +\==== Kafka客户端 此版本使用 Apache Kafka0.10.x.x 客户端。
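
As a closing sketch of how the `spring-kafka-test` utilities described in [Testing Applications](#testing) fit together, the following minimal JUnit 5 test combines `@EmbeddedKafka`, `KafkaTestUtils.producerProps()` / `consumerProps()`, and `KafkaTestUtils.getSingleRecord()`. This is only an illustrative sketch: the topic name `sketchTopic` and the consumer group `sketchGroup` are assumed names, not names used elsewhere in this document.

```
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.junit.jupiter.api.Test;

import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;

import static org.assertj.core.api.Assertions.assertThat;

// Illustrative sketch only: "sketchTopic" and "sketchGroup" are assumed names.
@EmbeddedKafka(partitions = 1, topics = "sketchTopic")
class EmbeddedKafkaSketchTests {

    @Test
    void sendsAndReceivesOneRecord(EmbeddedKafkaBroker broker) {
        // Producer pointed at the embedded broker.
        Map<String, Object> producerProps = KafkaTestUtils.producerProps(broker);
        DefaultKafkaProducerFactory<Integer, String> pf =
                new DefaultKafkaProducerFactory<>(producerProps);
        KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf);
        template.send("sketchTopic", 1, "hello");

        // Consumer configured from the same broker; read the topic from the beginning.
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("sketchGroup", "false", broker);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        try (Consumer<Integer, String> consumer =
                new DefaultKafkaConsumerFactory<Integer, String>(consumerProps).createConsumer()) {
            broker.consumeFromAnEmbeddedTopic(consumer, "sketchTopic");
            ConsumerRecord<Integer, String> received =
                    KafkaTestUtils.getSingleRecord(consumer, "sketchTopic");
            assertThat(received.value()).isEqualTo("hello");
        }
        pf.destroy();
    }
}
```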