listen(String in) {
+    ...
+    return MessageBuilder.withPayload(in.toUpperCase())
+            .setHeader(MessageHeaders.CONTENT_TYPE, "application/xml")
+            .build();
+}
+```
+
+This content type will be passed in the `MessageProperties` to the converter.
+By default, for backwards compatibility, any content type property set by the converter will be overwritten by this value after conversion.
+If you wish to override that behavior, also set the `AmqpHeaders.CONTENT_TYPE_CONVERTER_WINS` message header to `true`, and any value set by the converter will be retained.
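+For example, a sketch of a listener that lets the converter's content type take precedence (the queue name is hypothetical):
+
+```
+@RabbitListener(queues = "someQueue")
+public Message<String> listen(String in) {
+    return MessageBuilder.withPayload(in.toUpperCase())
+            .setHeader(MessageHeaders.CONTENT_TYPE, "application/xml")
+            .setHeader(AmqpHeaders.CONTENT_TYPE_CONVERTER_WINS, true)
+            .build();
+}
+```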
+
+###### Multi-method Listeners
+
+Starting with version 1.5.0, you can specify the `@RabbitListener` annotation at the class level.
+Together with the new `@RabbitHandler` annotation, this lets a single listener invoke different methods, based on
+the payload type of the incoming message.
+This is best described using an example:
+
+```
+@RabbitListener(id="multi", queues = "someQueue")
+@SendTo("my.reply.queue")
+public class MultiListenerBean {
+
+ @RabbitHandler
+ public String thing2(Thing2 thing2) {
+ ...
+ }
+
+ @RabbitHandler
+ public String cat(Cat cat) {
+ ...
+ }
+
+ @RabbitHandler
+ public String hat(@Header("amqp_receivedRoutingKey") String rk, @Payload Hat hat) {
+ ...
+ }
+
+ @RabbitHandler(isDefault = true)
+ public String defaultMethod(Object object) {
+ ...
+ }
+
+}
+```
+
+In this case, the individual `@RabbitHandler` methods are invoked if the converted payload is a `Thing2`, a `Cat`, or a `Hat`.
+You should understand that the system must be able to identify a unique method based on the payload type.
+The type is checked for assignability to a single parameter that has no annotations or that is annotated with the `@Payload` annotation.
+Notice that the same method signatures apply, as discussed in the method-level `@RabbitListener` ([described earlier](#message-listener-adapter)).
+
+Starting with version 2.0.3, a `@RabbitHandler` method can be designated as the default method, which is invoked if there is no match on other methods.
+At most, one method can be so designated.
+
+| |`@RabbitHandler` is intended only for processing message payloads after conversion. If you wish to receive the unconverted raw `Message` object, you must use `@RabbitListener` on the method, not the class.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+###### `@Repeatable` `@RabbitListener`
+
+Starting with version 1.6, the `@RabbitListener` annotation is marked with `@Repeatable`.
+This means that the annotation can appear on the same annotated element (method or class) multiple times.
+In this case, a separate listener container is created for each annotation, each of which invokes the same listener `@Bean`.
+Repeatable annotations can be used with Java 8 or above.
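+For example (a sketch; the ids and queue names are hypothetical), the same method can listen to two queues, each with its own container:
+
+```
+@RabbitListener(id = "one", queues = "queue.1")
+@RabbitListener(id = "two", queues = "queue.2")
+public void listen(String in) {
+    ...
+}
+```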
+
+###### Proxy `@RabbitListener` and Generics
+
+If your service is intended to be proxied (for example, in the case of `@Transactional`), you should keep in mind some considerations when
+the interface has generic parameters.
+Consider the following example:
+
+```
+interface TxService<P> {
+
+ String handle(P payload, String header);
+
+}
+
+static class TxServiceImpl implements TxService<Thing> {
+
+ @Override
+ @RabbitListener(...)
+ public String handle(Thing thing, String rk) {
+ ...
+ }
+
+}
+```
+
+With a generic interface and a particular implementation, you are forced to switch to the CGLIB target class proxy because the actual implementation of the interface `handle` method is a bridge method.
+In the case of transaction management, the use of CGLIB is configured by using
+an annotation option: `@EnableTransactionManagement(proxyTargetClass = true)`.
+And in this case, all annotations have to be declared on the target method in the implementation, as the following example shows:
+
+```
+static class TxServiceImpl implements TxService<Thing> {
+
+ @Override
+ @Transactional
+ @RabbitListener(...)
+    public String handle(@Payload Thing thing, @Header("amqp_receivedRoutingKey") String rk) {
+ ...
+ }
+
+}
+```
+
+###### Handling Exceptions
+
+By default, if an annotated listener method throws an exception, it is thrown to the container, and the message is requeued and redelivered, discarded, or routed to a dead letter exchange, depending on the container and broker configuration.
+Nothing is returned to the sender.
+
+Starting with version 2.0, the `@RabbitListener` annotation has two new attributes: `errorHandler` and `returnExceptions`.
+
+These are not configured by default.
+
+You can use the `errorHandler` to provide the bean name of a `RabbitListenerErrorHandler` implementation.
+This functional interface has one method, as follows:
+
+```
+@FunctionalInterface
+public interface RabbitListenerErrorHandler {
+
+    Object handleError(Message amqpMessage, org.springframework.messaging.Message<?> message,
+            ListenerExecutionFailedException exception) throws Exception;
+
+}
+```
+
+As you can see, you have access to the raw message received from the container, the spring-messaging `Message<?>` object produced by the message converter, and the exception that was thrown by the listener (wrapped in a `ListenerExecutionFailedException`).
+The error handler can either return some result (which is sent as the reply) or throw the original or a new exception (which is thrown to the container or returned to the sender, depending on the `returnExceptions` setting).
+
+The `returnExceptions` attribute, when `true`, causes exceptions to be returned to the sender.
+The exception is wrapped in a `RemoteInvocationResult` object.
+On the sender side, there is an available `RemoteInvocationAwareMessageConverterAdapter`, which, if configured into the `RabbitTemplate`, re-throws the server-side exception, wrapped in an `AmqpRemoteException`.
+The stack trace of the server exception is synthesized by merging the server and client stack traces.
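+For example, a sketch of the client-side configuration (the `connectionFactory` reference is assumed):
+
+```
+RabbitTemplate template = new RabbitTemplate(connectionFactory);
+// wraps the default SimpleMessageConverter
+template.setMessageConverter(new RemoteInvocationAwareMessageConverterAdapter());
+// sendAndReceive() operations now re-throw server-side exceptions,
+// wrapped in an AmqpRemoteException
+```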
+
+| |This mechanism generally works only with the default `SimpleMessageConverter`, which uses Java serialization. Exceptions are generally not “Jackson-friendly” and cannot be serialized to JSON. If you use JSON, consider using an `errorHandler` to return some other Jackson-friendly `Error` object when an exception is thrown.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| |In version 2.1, this interface moved from package `o.s.amqp.rabbit.listener` to `o.s.amqp.rabbit.listener.api`.|
+|---|---------------------------------------------------------------------------------------------------------------|
+
+Starting with version 2.1.7, the `Channel` is available in a messaging message header; this allows you to ack or nack the failed message when using `AcknowledgeMode.MANUAL`:
+
+```
+public Object handleError(Message amqpMessage, org.springframework.messaging.Message<?> message,
+        ListenerExecutionFailedException exception) {
+    ...
+    message.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class)
+        .basicReject(message.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class),
+                true);
+}
+```
+
+Starting with version 2.2.18, if a message conversion exception is thrown, the error handler will be called, with `null` in the `message` argument.
+This allows the application to send some result to the caller, indicating that a badly-formed message was received.
+Previously, such errors were thrown and handled by the container.
+
+###### Container Management
+
+Containers created for annotations are not registered with the application context.
+You can obtain a collection of all containers by invoking `getListenerContainers()` on the `RabbitListenerEndpointRegistry` bean.
+You can then iterate over this collection, for example, to stop or start all containers or invoke the `Lifecycle` methods
+on the registry itself, which will invoke the operations on each container.
+
+You can also get a reference to an individual container by using its `id`, using `getListenerContainer(String id)` — for
+example, `registry.getListenerContainer("multi")` for the container created by the snippet above.
+
+Starting with version 1.5.2, you can obtain the `id` values of the registered containers with `getListenerContainerIds()`.
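+For example, a sketch of stopping and restarting a single container by its `id` (the registry can be autowired by type):
+
+```
+@Autowired
+private RabbitListenerEndpointRegistry registry;
+
+public void restartMulti() {
+    MessageListenerContainer container = this.registry.getListenerContainer("multi");
+    container.stop();
+    container.start();
+}
+```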
+
+Starting with version 1.5, you can now assign a `group` to the container on the `RabbitListener` endpoint.
+This provides a mechanism to get a reference to a subset of containers.
+Adding a `group` attribute causes a bean of type `Collection<MessageListenerContainer>` to be registered with the context under the group name.
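+For example (a sketch; the ids, queue names, and group name are hypothetical), the group bean can then be injected to manage just that subset of containers:
+
+```
+@RabbitListener(id = "one", queues = "queue.1", group = "myGroup")
+public void listen1(String in) {
+    ...
+}
+
+@RabbitListener(id = "two", queues = "queue.2", group = "myGroup")
+public void listen2(String in) {
+    ...
+}
+
+// elsewhere
+@Autowired
+@Qualifier("myGroup")
+private Collection<MessageListenerContainer> myGroup;
+```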
+
+##### @RabbitListener with Batching
+
+When receiving [a batch](#template-batching) of messages, the de-batching is normally performed by the container, and the listener is invoked with one message at a time.
+Starting with version 2.2, you can configure the listener container factory and listener to receive the entire batch in one call: set the factory’s `batchListener` property, and make the method payload parameter a `List`:
+
+```
+@Bean
+public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
+ SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
+ factory.setConnectionFactory(connectionFactory());
+ factory.setBatchListener(true);
+ return factory;
+}
+
+@RabbitListener(queues = "batch.1")
+public void listen1(List<Thing> in) {
+ ...
+}
+
+// or
+
+@RabbitListener(queues = "batch.2")
+public void listen2(List<Message<Thing>> in) {
+ ...
+}
+```
+
+Setting the `batchListener` property to `true` automatically turns off the `deBatchingEnabled` container property in containers that the factory creates (unless `consumerBatchEnabled` is `true` - see below). Effectively, the debatching is moved from the container to the listener adapter, and the adapter creates the list that is passed to the listener.
+
+A batch-enabled factory cannot be used with a [multi-method listener](#annotation-method-selection).
+
+Also starting with version 2.2, when receiving batched messages one-at-a-time, the last message contains a boolean header set to `true`.
+This header can be obtained by adding a `@Header(AmqpHeaders.LAST_IN_BATCH) boolean last` parameter to your listener method.
+The header is mapped from `MessageProperties.isLastInBatch()`.
+In addition, `AmqpHeaders.BATCH_SIZE` is populated with the size of the batch in every message fragment.
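+For example, a sketch of a one-at-a-time listener that detects the batch boundary (the payload type and queue name are hypothetical):
+
+```
+@RabbitListener(queues = "batch.1")
+public void listen(Thing thing,
+        @Header(AmqpHeaders.LAST_IN_BATCH) boolean last,
+        @Header(AmqpHeaders.BATCH_SIZE) int batchSize) {
+    ...
+    if (last) {
+        // all batchSize fragments of this producer-side batch have now been received
+    }
+}
+```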
+
+In addition, a new property `consumerBatchEnabled` has been added to the `SimpleMessageListenerContainer`.
+When this is true, the container will create a batch of messages, up to `batchSize`; a partial batch is delivered if `receiveTimeout` elapses with no new messages arriving.
+If a producer-created batch is received, it is debatched and added to the consumer-side batch; therefore the actual number of messages delivered may exceed `batchSize`, which represents the number of messages received from the broker.
+`deBatchingEnabled` must be true when `consumerBatchEnabled` is true; the container factory will enforce this requirement.
+
+```
+@Bean
+public SimpleRabbitListenerContainerFactory consumerBatchContainerFactory() {
+ SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
+ factory.setConnectionFactory(rabbitConnectionFactory());
+ factory.setConsumerTagStrategy(consumerTagStrategy());
+ factory.setBatchListener(true); // configures a BatchMessageListenerAdapter
+ factory.setBatchSize(2);
+ factory.setConsumerBatchEnabled(true);
+ return factory;
+}
+```
+
+When using `consumerBatchEnabled` with `@RabbitListener`:
+
+```
+@RabbitListener(queues = "batch.1", containerFactory = "consumerBatchContainerFactory")
+public void consumerBatch1(List<Message> amqpMessages) {
+ this.amqpMessagesReceived = amqpMessages;
+ this.batch1Latch.countDown();
+}
+
+@RabbitListener(queues = "batch.2", containerFactory = "consumerBatchContainerFactory")
+public void consumerBatch2(List<org.springframework.messaging.Message<?>> messages) {
+ this.messagingMessagesReceived = messages;
+ this.batch2Latch.countDown();
+}
+
+@RabbitListener(queues = "batch.3", containerFactory = "consumerBatchContainerFactory")
+public void consumerBatch3(List<String> strings) {
+ this.batch3Strings = strings;
+ this.batch3Latch.countDown();
+}
+```
+
+* The first is called with the raw, unconverted `org.springframework.amqp.core.Message` instances received.
+
+* The second is called with `org.springframework.messaging.Message<?>` instances with converted payloads and mapped headers/properties.
+
+* The third is called with the converted payloads, with no access to headers/properties.
+
+You can also add a `Channel` parameter, often used when using `MANUAL` ack mode.
+This is not very useful with the third example because you don’t have access to the `delivery_tag` property.
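+For example, a sketch of a batch listener that acks the whole batch manually (it uses the container factory from the example above; error handling is omitted):
+
+```
+@RabbitListener(queues = "batch.1", containerFactory = "consumerBatchContainerFactory",
+        ackMode = "MANUAL")
+public void consumerBatch(List<Message> amqpMessages, Channel channel) throws IOException {
+    ...
+    long lastTag = amqpMessages.get(amqpMessages.size() - 1)
+            .getMessageProperties().getDeliveryTag();
+    channel.basicAck(lastTag, true); // 'multiple = true' acks everything up to lastTag
+}
+```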
+
+##### Using Container Factories
+
+Listener container factories were introduced to support the `@RabbitListener` annotation and to register containers with the `RabbitListenerEndpointRegistry`, as discussed in [Programmatic Endpoint Registration](#async-annotation-driven-registration).
+
+Starting with version 2.1, they can be used to create any listener container — even a container without a listener (such as for use in Spring Integration).
+Of course, a listener must be added before the container is started.
+
+There are two ways to create such containers:
+
+* Use a `SimpleRabbitListenerEndpoint`
+
+* Add the listener after creation
+
+The following example shows how to use a `SimpleRabbitListenerEndpoint` to create a listener container:
+
+```
+@Bean
+public SimpleMessageListenerContainer factoryCreatedContainerSimpleListener(
+ SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory) {
+ SimpleRabbitListenerEndpoint endpoint = new SimpleRabbitListenerEndpoint();
+ endpoint.setQueueNames("queue.1");
+ endpoint.setMessageListener(message -> {
+ ...
+ });
+ return rabbitListenerContainerFactory.createListenerContainer(endpoint);
+}
+```
+
+The following example shows how to add the listener after creation:
+
+```
+@Bean
+public SimpleMessageListenerContainer factoryCreatedContainerNoListener(
+ SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory) {
+ SimpleMessageListenerContainer container = rabbitListenerContainerFactory.createListenerContainer();
+ container.setMessageListener(message -> {
+ ...
+ });
+ container.setQueueNames("test.no.listener.yet");
+ return container;
+}
+```
+
+In either case, the listener can also be a `ChannelAwareMessageListener`, since it is now a sub-interface of `MessageListener`.
+
+These techniques are useful if you wish to create several containers with similar properties, or to use a pre-configured container factory such as the one provided by Spring Boot auto-configuration, or both.
+
+| |Containers created this way are normal `@Bean` instances and are not registered in the `RabbitListenerEndpointRegistry`.|
+|---|------------------------------------------------------------------------------------------------------------------------|
+
+##### Asynchronous `@RabbitListener` Return Types
+
+Starting with version 2.1, `@RabbitListener` (and `@RabbitHandler`) methods can be specified with asynchronous return types `ListenableFuture<?>` and `Mono<?>`, letting the reply be sent asynchronously.
+
+| |The listener container factory must be configured with `AcknowledgeMode.MANUAL` so that the consumer thread will not ack the message; instead, the asynchronous completion will ack or nack the message when the async operation completes. When the async result is completed with an error, whether the message is requeued or not depends on the exception type thrown, the container configuration, and the container error handler. By default, the message will be requeued, unless the container’s `defaultRequeueRejected` property is set to `false` (it is `true` by default). If the async result is completed with an `AmqpRejectAndDontRequeueException`, the message will not be requeued. If the container’s `defaultRequeueRejected` property is `false`, you can override that by setting the future’s exception to an `ImmediateRequeueException` and the message will be requeued. If some exception occurs within the listener method that prevents creation of the async result object, you MUST catch that exception and return an appropriate return object that will cause the message to be acknowledged or requeued.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Starting with versions 2.2.21, 2.3.13, and 2.4.1, the `AcknowledgeMode` will be automatically set to `MANUAL` when async return types are detected.
+In addition, incoming messages with fatal exceptions will be negatively acknowledged individually; previously, any prior unacknowledged messages were also negatively acknowledged.
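+For example, a sketch of a `Mono` listener (assumes Project Reactor is on the class path; the queue name is hypothetical):
+
+```
+@RabbitListener(queues = "async.uppercase")
+public Mono<String> listen(String in) {
+    // the reply is sent, and the message ack'd, when the Mono completes
+    return Mono.fromSupplier(in::toUpperCase);
+}
+```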
+
+##### Threading and Asynchronous Consumers
+
+A number of different threads are involved with asynchronous consumers.
+
+Threads from the `TaskExecutor` configured in the `SimpleMessageListenerContainer` are used to invoke the `MessageListener` when a new message is delivered by `RabbitMQ Client`.
+If not configured, a `SimpleAsyncTaskExecutor` is used.
+If you use a pooled executor, you need to ensure the pool size is sufficient to handle the configured concurrency.
+With the `DirectMessageListenerContainer`, the `MessageListener` is invoked directly on a `RabbitMQ Client` thread.
+In this case, the `taskExecutor` is used for the task that monitors the consumers.
+
+| |When using the default `SimpleAsyncTaskExecutor`, for the threads the listener is invoked on, the listener container `beanName` is used in the `threadNamePrefix`. This is useful for log analysis. We generally recommend always including the thread name in the logging appender configuration. When a `TaskExecutor` is specifically provided through the `taskExecutor` property on the container, it is used as is, without modification. It is recommended that you use a similar technique to name the threads created by a custom `TaskExecutor` bean definition, to aid with thread identification in log messages.|
+|---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The `Executor` configured in the `CachingConnectionFactory` is passed into the `RabbitMQ Client` when creating the connection, and its threads are used to deliver new messages to the listener container.
+If this is not configured, the client uses an internal thread pool executor with (at the time of writing) a pool size of `Runtime.getRuntime().availableProcessors() * 2` for each connection.
+
+If you have a large number of factories or are using `CacheMode.CONNECTION`, you may wish to consider using a shared `ThreadPoolTaskExecutor` with enough threads to satisfy your workload.
+
+| |With the `DirectMessageListenerContainer`, you need to ensure that the connection factory is configured with a task executor that has sufficient threads to support your desired concurrency across all listener containers that use that factory. The default pool size (at the time of writing) is `Runtime.getRuntime().availableProcessors() * 2`.|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The `RabbitMQ client` uses a `ThreadFactory` to create threads for low-level I/O (socket) operations.
+To modify this factory, you need to configure the underlying RabbitMQ `ConnectionFactory`, as discussed in [Configuring the Underlying Client Connection Factory](#connection-factory).
+
+##### Choosing a Container
+
+Version 2.0 introduced the `DirectMessageListenerContainer` (DMLC).
+Previously, only the `SimpleMessageListenerContainer` (SMLC) was available.
+The SMLC uses an internal queue and a dedicated thread for each consumer.
+If a container is configured to listen to multiple queues, the same consumer thread is used to process all the queues.
+Concurrency is controlled by `concurrentConsumers` and other properties.
+As messages arrive from the RabbitMQ client, the client thread hands them off to the consumer thread through the queue.
+This architecture was required because, in early versions of the RabbitMQ client, multiple concurrent deliveries were not possible.
+Newer versions of the client have a revised threading model and can now support concurrency.
+This has allowed the introduction of the DMLC where the listener is now invoked directly on the RabbitMQ Client thread.
+Its architecture is, therefore, actually “simpler” than the SMLC.
+However, there are some limitations with this approach, and certain features of the SMLC are not available with the DMLC.
+Also, concurrency is controlled by `consumersPerQueue` (and the client library’s thread pool).
+The `concurrentConsumers` and associated properties are not available with this container.
+
+The following features are available with the SMLC but not the DMLC:
+
+* `batchSize`: With the SMLC, you can set this to control how many messages are delivered in a transaction or to reduce the number of acks, but it may cause the number of duplicate deliveries to increase after a failure.
+ (The DMLC does have `messagesPerAck`, which you can use to reduce the acks, the same as with `batchSize` and the SMLC, but it cannot be used with transactions — each message is delivered and ack’d in a separate transaction).
+
+* `consumerBatchEnabled`: enables batching of discrete messages in the consumer; see [Message Listener Container Configuration](#containerAttributes) for more information.
+
+* `maxConcurrentConsumers` and consumer scaling intervals or triggers — there is no auto-scaling in the DMLC.
+ It does, however, let you programmatically change the `consumersPerQueue` property and the consumers are adjusted accordingly.
+
+However, the DMLC has the following benefits over the SMLC:
+
+* Adding and removing queues at runtime is more efficient.
+ With the SMLC, the entire consumer thread is restarted (all consumers canceled and re-created).
+ With the DMLC, unaffected consumers are not canceled.
+
+* The context switch between the RabbitMQ Client thread and the consumer thread is avoided.
+
+* Threads are shared across consumers rather than having a dedicated thread for each consumer in the SMLC.
+ However, see the IMPORTANT note about the connection factory configuration in [Threading and Asynchronous Consumers](#threading).
+
+See [Message Listener Container Configuration](#containerAttributes) for information about which configuration properties apply to each container.
+
+##### Detecting Idle Asynchronous Consumers
+
+While efficient, one problem with asynchronous consumers is detecting when they are idle — users might want to take
+some action if no messages arrive for some period of time.
+
+Starting with version 1.6, it is now possible to configure the listener container to publish a `ListenerContainerIdleEvent` when some time passes with no message delivery.
+While the container is idle, an event is published every `idleEventInterval` milliseconds.
+
+To configure this feature, set `idleEventInterval` on the container.
+The following example shows how to do so in XML and in Java (for both a `SimpleMessageListenerContainer` and a `SimpleRabbitListenerContainerFactory`):
+
+```
+<rabbit:listener-container connection-factory="rabbitConnectionFactory"
+        idle-event-interval="60000">
+    <rabbit:listener id="container1" queue-names="some.queue" ref="someListener" method="handle" />
+</rabbit:listener-container>
+```
+
+```
+@Bean
+public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
+ SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
+ ...
+ container.setIdleEventInterval(60000L);
+ ...
+ return container;
+}
+```
+
+```
+@Bean
+public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
+ SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
+ factory.setConnectionFactory(rabbitConnectionFactory());
+ factory.setIdleEventInterval(60000L);
+ ...
+ return factory;
+}
+```
+
+In each of these cases, an event is published once per minute while the container is idle.
+
+###### Event Consumption
+
+You can capture idle events by implementing `ApplicationListener` — either a general listener, or one narrowed to only
+receive this specific event.
+You can also use `@EventListener`, introduced in Spring Framework 4.2.
+
+The following example combines the `@RabbitListener` and `@EventListener` into a single class.
+You need to understand that the application listener gets events for all containers, so you may need to
+check the listener ID if you want to take specific action based on which container is idle.
+You can also use the `@EventListener` `condition` for this purpose.
+
+The events have four properties:
+
+* `source`: The listener container instance
+
+* `id`: The listener ID (or container bean name)
+
+* `idleTime`: The time the container had been idle when the event was published
+
+* `queueNames`: The names of the queue(s) that the container listens to
+
+The following example shows how to create listeners by using both the `@RabbitListener` and the `@EventListener` annotations:
+
+```
+public class Listener {
+
+ @RabbitListener(id="someId", queues="#{queue.name}")
+ public String listen(String foo) {
+ return foo.toUpperCase();
+ }
+
+ @EventListener(condition = "event.listenerId == 'someId'")
+ public void onApplicationEvent(ListenerContainerIdleEvent event) {
+ ...
+ }
+
+}
+```
+
+| |Event listeners see events for all containers. Consequently, in the preceding example, we narrow the events received based on the listener ID.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| |If you wish to use the idle event to stop the listener container, you should not call `container.stop()` on the thread that calls the listener. Doing so always causes delays and unnecessary log messages. Instead, you should hand off the event to a different thread that can then stop the container.|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+##### Monitoring Listener Performance
+
+Starting with version 2.2, the listener containers will automatically create and update Micrometer `Timer`s for the listener if Micrometer is detected on the class path and a `MeterRegistry` is present in the application context.
+The timers can be disabled by setting the container property `micrometerEnabled` to `false`.
+
+Two timers are maintained - one for successful calls to the listener and one for failures.
+With a simple `MessageListener`, there is a pair of timers for each configured queue.
+
+The timers are named `spring.rabbitmq.listener` and have the following tags:
+
+* `listenerId` : (listener id or container bean name)
+
+* `queue` : (the queue name for a simple listener or list of configured queue names when `consumerBatchEnabled` is `true` - because a batch may contain messages from multiple queues)
+
+* `result` : `success` or `failure`
+
+* `exception` : `none` or `ListenerExecutionFailedException`
+
+You can add additional tags using the `micrometerTags` container property.
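+For example, a sketch using the factory's container customizer to add a custom tag (assumes version 2.2.2 or later for `setContainerCustomizer`; the tag name and value are hypothetical):
+
+```
+@Bean
+public SimpleRabbitListenerContainerFactory taggedContainerFactory(ConnectionFactory cf) {
+    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
+    factory.setConnectionFactory(cf);
+    factory.setContainerCustomizer(container ->
+            container.setMicrometerTags(Map.of("region", "us-east-1")));
+    return factory;
+}
+```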
+
+#### 4.1.7. Containers and Broker-Named Queues
+
+While it is preferable to use `AnonymousQueue` instances as auto-delete queues, starting with version 2.1, you can use broker-named queues with listener containers.
+The following example shows how to do so:
+
+```
+@Bean
+public Queue queue() {
+ return new Queue("", false, true, true);
+}
+
+@Bean
+public SimpleMessageListenerContainer container() {
+ SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf());
+ container.setQueues(queue());
+ container.setMessageListener(m -> {
+ ...
+ });
+ container.setMissingQueuesFatal(false);
+ return container;
+}
+```
+
+Notice the empty `String` for the name.
+When the `RabbitAdmin` declares queues, it updates the `Queue.actualName` property with the name returned by the broker.
+You must use `setQueues()` when you configure the container for this to work, so that the container can access the declared name at runtime.
+Just setting the names is insufficient.
+
+| |You cannot add broker-named queues to the containers while they are running.|
+|---|----------------------------------------------------------------------------|
+
+| |When a connection is reset and a new one is established, the new queue gets a new name. Since there is a race condition between the container restarting and the queue being re-declared, it is important to set the container’s `missingQueuesFatal` property to `false`, since the container is likely to initially try to reconnect to the old queue.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+#### 4.1.8. Message Converters
+
+The `AmqpTemplate` also defines several methods for sending and receiving messages that delegate to a `MessageConverter`.
+The `MessageConverter` provides a single method for each direction: one for converting **to** a `Message` and another for converting **from** a `Message`.
+Notice that, when converting to a `Message`, you can also provide properties in addition to the object.
+The `object` parameter typically corresponds to the Message body.
+The following listing shows the `MessageConverter` interface definition:
+
+```
+public interface MessageConverter {
+
+ Message toMessage(Object object, MessageProperties messageProperties)
+ throws MessageConversionException;
+
+ Object fromMessage(Message message) throws MessageConversionException;
+
+}
+```
+
+The relevant `Message`-sending methods on the `AmqpTemplate` are simpler than the methods we discussed previously, because they do not require the `Message` instance.
+Instead, the `MessageConverter` is responsible for “creating” each `Message` by converting the provided object to the byte array for the `Message` body and then adding any provided `MessageProperties`.
+The following listing shows the definitions of the various methods:
+
+```
+void convertAndSend(Object message) throws AmqpException;
+
+void convertAndSend(String routingKey, Object message) throws AmqpException;
+
+void convertAndSend(String exchange, String routingKey, Object message)
+ throws AmqpException;
+
+void convertAndSend(Object message, MessagePostProcessor messagePostProcessor)
+ throws AmqpException;
+
+void convertAndSend(String routingKey, Object message,
+ MessagePostProcessor messagePostProcessor) throws AmqpException;
+
+void convertAndSend(String exchange, String routingKey, Object message,
+ MessagePostProcessor messagePostProcessor) throws AmqpException;
+```
+
+On the receiving side, there are only two methods: one that accepts the queue name and one that relies on the template’s “queue” property having been set.
+The following listing shows the definitions of the two methods:
+
+```
+Object receiveAndConvert() throws AmqpException;
+
+Object receiveAndConvert(String queueName) throws AmqpException;
+```
+
+| |The `MessageListenerAdapter` mentioned in [Asynchronous Consumer](#async-consumer) also uses a `MessageConverter`.|
+|---|------------------------------------------------------------------------------------------------------------------|
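+
+To illustrate, a short sketch using a `RabbitTemplate` (the queue name is hypothetical):
+
+```
+// send: the converter builds the Message from the object
+rabbitTemplate.convertAndSend("some.queue", "Hello, world!");
+
+// receive: the converter rebuilds the payload from the Message body
+Object received = rabbitTemplate.receiveAndConvert("some.queue");
+```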
+
+##### `SimpleMessageConverter`
+
+The default implementation of the `MessageConverter` strategy is called `SimpleMessageConverter`.
+This is the converter that is used by an instance of `RabbitTemplate` if you do not explicitly configure an alternative.
+It handles text-based content, serialized Java objects, and byte arrays.
+
+###### Converting From a `Message`
+
+If the content type of the input `Message` begins with "text" (for example,
+"text/plain"), it also checks for the content-encoding property to determine the charset to be used when converting the `Message` body byte array to a Java `String`.
+If no content-encoding property has been set on the input `Message`, it uses the UTF-8 charset by default.
+If you need to override that default setting, you can configure an instance of `SimpleMessageConverter`, set its `defaultCharset` property, and inject that into a `RabbitTemplate` instance.
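+
+For example, a sketch of overriding the default charset (the charset value is illustrative):
+
+```
+@Bean
+public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
+    SimpleMessageConverter converter = new SimpleMessageConverter();
+    converter.setDefaultCharset("ISO-8859-1"); // used when no content-encoding is present
+    RabbitTemplate template = new RabbitTemplate(connectionFactory);
+    template.setMessageConverter(converter);
+    return template;
+}
+```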
+
+If the content-type property value of the input `Message` is set to "application/x-java-serialized-object", the `SimpleMessageConverter` tries to deserialize (rehydrate) the byte array into a Java object.
+While that might be useful for simple prototyping, we do not recommend relying on Java serialization, since it leads to tight coupling between the producer and the consumer.
+Of course, it also rules out usage of non-Java systems on either side.
+With AMQP being a wire-level protocol, it would be unfortunate to lose much of that advantage with such restrictions.
+In the next two sections, we explore some alternatives for passing rich domain object content without relying on Java serialization.
+
+For all other content-types, the `SimpleMessageConverter` returns the `Message` body content directly as a byte array.
+
+See [Java Deserialization](#java-deserialization) for important information.
+
+###### Converting To a `Message`
+
+When converting to a `Message` from an arbitrary Java Object, the `SimpleMessageConverter` likewise deals with byte arrays, strings, and serializable instances.
+It converts each of these to bytes (in the case of byte arrays, there is nothing to convert), and it sets the content-type property accordingly.
+If the `Object` to be converted does not match one of those types, the `Message` body is null.
+
+##### `SerializerMessageConverter`
+
+This converter is similar to the `SimpleMessageConverter` except that it can be configured with other Spring Framework `Serializer` and `Deserializer` implementations for `application/x-java-serialized-object` conversions.
+
+See [Java Deserialization](#java-deserialization) for important information.
+
+##### Jackson2JsonMessageConverter
+
+This section covers using the `Jackson2JsonMessageConverter` to convert to and from a `Message`.
+It has the following sections:
+
+* [Converting to a `Message`](#Jackson2JsonMessageConverter-to-message)
+
+* [Converting from a `Message`](#Jackson2JsonMessageConverter-from-message)
+
+###### Converting to a `Message`
+
+As mentioned in the previous section, relying on Java serialization is generally not recommended.
+One rather common alternative that is more flexible and portable across different languages and platforms is JSON
+(JavaScript Object Notation).
+The converter can be configured on any `RabbitTemplate` instance to override its usage of the `SimpleMessageConverter` default.
+The `Jackson2JsonMessageConverter` uses the `com.fasterxml.jackson` 2.x library.
+The following example configures a `Jackson2JsonMessageConverter`:
+
+```
+<bean class="org.springframework.amqp.rabbit.core.RabbitTemplate">
+    <property name="connectionFactory" ref="rabbitConnectionFactory"/>
+    <property name="messageConverter">
+        <bean class="org.springframework.amqp.support.converter.Jackson2JsonMessageConverter">
+            <!-- if necessary, override the DefaultClassMapper -->
+            <property name="classMapper" ref="customClassMapper"/>
+        </bean>
+    </property>
+</bean>
+```
+
+As shown above, `Jackson2JsonMessageConverter` uses a `DefaultClassMapper` by default.
+Type information is added to (and retrieved from) `MessageProperties`.
+If an inbound message does not contain type information in `MessageProperties`, but you know the expected type, you
+can configure a static type by using the `defaultType` property, as the following example shows:
+
+```
+<bean id="jsonConverterWithDefaultType"
+      class="org.springframework.amqp.support.converter.Jackson2JsonMessageConverter">
+    <property name="classMapper">
+        <bean class="org.springframework.amqp.support.converter.DefaultClassMapper">
+            <property name="defaultType" value="thing1.PurchaseOrder"/>
+        </bean>
+    </property>
+</bean>
+```
+
+In addition, you can provide custom mappings from the value in the `*TypeId*` header.
+The following example shows how to do so:
+
+```
+@Bean
+public Jackson2JsonMessageConverter jsonMessageConverter() {
+ Jackson2JsonMessageConverter jsonConverter = new Jackson2JsonMessageConverter();
+ jsonConverter.setClassMapper(classMapper());
+ return jsonConverter;
+}
+
+@Bean
+public DefaultClassMapper classMapper() {
+ DefaultClassMapper classMapper = new DefaultClassMapper();
+    Map<String, Class<?>> idClassMapping = new HashMap<>();
+ idClassMapping.put("thing1", Thing1.class);
+ idClassMapping.put("thing2", Thing2.class);
+ classMapper.setIdClassMapping(idClassMapping);
+ return classMapper;
+}
+```
+
+Now, if the sending system sets the header to `thing1`, the converter creates a `Thing1` object, and so on.
+See the [Receiving JSON from Non-Spring Applications](#spring-rabbit-json) sample application for a complete discussion about converting messages from non-Spring applications.
+
+###### Converting from a `Message`
+
+Inbound messages are converted to objects according to the type information added to headers by the sending system.
+
+In versions prior to 1.6, if type information was not present, conversion would fail.
+Starting with version 1.6, if type information is missing, the converter converts the JSON by using Jackson defaults (usually a map).
+
+Also, starting with version 1.6, when you use `@RabbitListener` annotations (on methods), the inferred type information is added to the `MessageProperties`.
+This lets the converter convert to the argument type of the target method.
+This only applies if there is one parameter with no annotations or a single parameter with the `@Payload` annotation.
+Parameters of type `Message` are ignored during the analysis.
+
+| |By default, the inferred type information will override the inbound `*TypeId*` and related headers created by the sending system. This lets the receiving system automatically convert to a different domain object. This applies only if the parameter type is concrete (not abstract or an interface) or it is from the `java.util` package. In all other cases, the `*TypeId*` and related headers are used. There are cases where you might wish to override the default behavior and always use the `*TypeId*` information. For example, suppose you have a `@RabbitListener` that takes a `Thing1` argument but the message contains a `Thing2` that is a subclass of `Thing1` (which is concrete). The inferred type would be incorrect. To handle this situation, set the `TypePrecedence` property on the `Jackson2JsonMessageConverter` to `TYPE_ID` instead of the default `INFERRED`. (The property is actually on the converter’s `DefaultJackson2JavaTypeMapper`, but a setter is provided on the converter for convenience.) If you inject a custom type mapper, you should set the property on the mapper instead.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| |When converting from the `Message`, an incoming `MessageProperties.getContentType()` must be JSON-compliant (`contentType.contains("json")` is used to check). Starting with version 2.2, `application/json` is assumed if there is no `contentType` property, or it has the default value `application/octet-stream`. To revert to the previous behavior (return an unconverted `byte[]`), set the converter’s `assumeSupportedContentType` property to `false`. If the content type is not supported, a `WARN` log message `Could not convert incoming message with content-type […]` is emitted and `message.getBody()` is returned as is — as a `byte[]`. So, to meet the `Jackson2JsonMessageConverter` requirements on the consumer side, the producer must add the `contentType` message property — for example, as `application/json` or `text/x-json` or by using the `Jackson2JsonMessageConverter`, which sets the header automatically. The following listing shows a number of converter calls:|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+```
+@RabbitListener
+public void thing1(Thing1 thing1) {...}
+
+@RabbitListener
+public void thing1(@Payload Thing1 thing1, @Header("amqp_consumerQueue") String queue) {...}
+
+@RabbitListener
+public void thing1(Thing1 thing1, o.s.amqp.core.Message message) {...}
+
+@RabbitListener
+public void thing1(Thing1 thing1, o.s.messaging.Message message) {...}
+
+@RabbitListener
+public void thing1(Thing1 thing1, String bar) {...}
+
+@RabbitListener
+public void thing1(Thing1 thing1, o.s.messaging.Message<?> message) {...}
+```
+
+In the first four cases in the preceding listing, the converter tries to convert to the `Thing1` type.
+The fifth example is invalid because we cannot determine which argument should receive the message payload.
+With the sixth example, the Jackson defaults apply due to the generic type being a `WildcardType`.
+
+You can, however, create a custom converter and use the `targetMethod` message property to decide which type to convert
+the JSON to.
+
+| |This type inference can only be achieved when the `@RabbitListener` annotation is declared at the method level. With class-level `@RabbitListener`, the converted type is used to select which `@RabbitHandler` method to invoke. For this reason, the infrastructure provides the `targetObject` message property, which you can use in a custom converter to determine the type.|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| |Starting with version 1.6.11, `Jackson2JsonMessageConverter` and, therefore, `DefaultJackson2JavaTypeMapper` (`DefaultClassMapper`) provide the `trustedPackages` option to overcome [Serialization Gadgets](https://pivotal.io/security/cve-2017-4995) vulnerability. By default and for backward compatibility, the `Jackson2JsonMessageConverter` trusts all packages — that is, it uses `*` for the option.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+###### Deserializing Abstract Classes
+
+Prior to version 2.2.8, if the inferred type of a `@RabbitListener` was an abstract class (including interfaces), the converter would fall back to looking for type information in the headers and, if present, use that information; if none was present, it would try to create the abstract class.
+This caused a problem when a custom `ObjectMapper` configured with a custom deserializer to handle the abstract class was used but the incoming message had invalid type headers.
+
+Starting with version 2.2.8, the previous behavior is retained by default.
+If you have such a custom `ObjectMapper` and you want to ignore type headers and always use the inferred type for conversion, set the `alwaysConvertToInferredType` property to `true`.
+The default (`false`) is retained for backwards compatibility and to avoid the overhead of an attempted conversion when it would fail (with a standard `ObjectMapper`).
+
+###### Using Spring Data Projection Interfaces
+
+Starting with version 2.2, you can convert JSON to a Spring Data Projection interface instead of a concrete type.
+This allows very selective and low-coupled bindings to data, including the lookup of values from multiple places inside the JSON document.
+For example, the following interface can be defined as a message payload type:
+
+```
+interface SomeSample {
+
+ @JsonPath({ "$.username", "$.user.name" })
+ String getUsername();
+
+}
+```
+
+```
+@RabbitListener(queues = "projection")
+public void projection(SomeSample in) {
+ String username = in.getUsername();
+ ...
+}
+```
+
+By default, accessor methods are used to look up the property name as a field in the received JSON document.
+The `@JsonPath` expression allows customization of the value lookup and even lets you define multiple JSON path expressions, to look up values from multiple places until an expression returns an actual value.
+
+To enable this feature, set the `useProjectionForInterfaces` property to `true` on the message converter.
+You must also add `spring-data:spring-data-commons` and `com.jayway.jsonpath:json-path` to the class path.
+
+When used as the parameter to a `@RabbitListener` method, the interface type is automatically passed to the converter as normal.
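+
+A minimal sketch of enabling projections on the converter:
+
+```
+@Bean
+public Jackson2JsonMessageConverter projectingConverter() {
+    Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
+    // requires spring-data-commons and json-path on the class path
+    converter.setUseProjectionForInterfaces(true);
+    return converter;
+}
+```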
+
+###### Converting From a `Message` With `RabbitTemplate`
+
+As mentioned earlier, type information is conveyed in message headers to assist the converter when converting from a message.
+This works fine in most cases.
+However, when using generic types, it can only convert simple objects and known “container” objects (lists, arrays, and maps).
+Starting with version 2.0, the `Jackson2JsonMessageConverter` implements `SmartMessageConverter`, which lets it be used with the new `RabbitTemplate` methods that take a `ParameterizedTypeReference` argument.
+This allows conversion of complex generic types, as shown in the following example:
+
+```
+Thing1<Thing2<Cat, Hat>> thing1 =
+    rabbitTemplate.receiveAndConvert(new ParameterizedTypeReference<Thing1<Thing2<Cat, Hat>>>() { });
+```
+
+| |Starting with version 2.1, the `AbstractJsonMessageConverter` class has been removed. It is no longer the base class for `Jackson2JsonMessageConverter`. It has been replaced by `AbstractJackson2MessageConverter`.|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+##### `MarshallingMessageConverter`
+
+Yet another option is the `MarshallingMessageConverter`.
+It delegates to the Spring OXM library’s implementations of the `Marshaller` and `Unmarshaller` strategy interfaces.
+You can read more about that library [here](https://docs.spring.io/spring/docs/current/spring-framework-reference/html/oxm.html).
+In terms of configuration, it is most common to provide only the constructor argument, since most implementations of `Marshaller` also implement `Unmarshaller`.
+The following example shows how to configure a `MarshallingMessageConverter`:
+
+```
+<bean class="org.springframework.amqp.rabbit.core.RabbitTemplate">
+    <constructor-arg ref="rabbitConnectionFactory"/>
+    <property name="messageConverter">
+        <bean class="org.springframework.amqp.support.converter.MarshallingMessageConverter">
+            <constructor-arg ref="someImplementationOfMarshallerAndUnmarshaller"/>
+        </bean>
+    </property>
+</bean>
+```
+
+##### `Jackson2XmlMessageConverter`
+
+This class was introduced in version 2.1 and can be used to convert messages from and to XML.
+
+Both `Jackson2XmlMessageConverter` and `Jackson2JsonMessageConverter` have the same base class: `AbstractJackson2MessageConverter`.
+
+| |The `AbstractJackson2MessageConverter` class is introduced to replace a removed class: `AbstractJsonMessageConverter`.|
+|---|----------------------------------------------------------------------------------------------------------------------|
+
+The `Jackson2XmlMessageConverter` uses the `com.fasterxml.jackson` 2.x library.
+
+You can use it the same way as `Jackson2JsonMessageConverter`, except it supports XML instead of JSON.
+The following example configures a `Jackson2XmlMessageConverter`:
+
+```
+<bean id="xmlConverterWithDefaultType"
+      class="org.springframework.amqp.support.converter.Jackson2XmlMessageConverter">
+    <property name="classMapper">
+        <bean class="org.springframework.amqp.support.converter.DefaultClassMapper">
+            <property name="defaultType" value="thing1.PurchaseOrder"/>
+        </bean>
+    </property>
+</bean>
+```
+
+See [Jackson2JsonMessageConverter](#json-message-converter) for more information.
+
+| |Starting with version 2.2, `application/xml` is assumed if there is no `contentType` property, or it has the default value `application/octet-stream`. To revert to the previous behavior (return an unconverted `byte[]`), set the converter’s `assumeSupportedContentType` property to `false`.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+##### `ContentTypeDelegatingMessageConverter`
+
+This class was introduced in version 1.4.2 and allows delegation to a specific `MessageConverter` based on the content type property in the `MessageProperties`.
+By default, it delegates to a `SimpleMessageConverter` if there is no `contentType` property or there is a value that matches none of the configured converters.
+The following example configures a `ContentTypeDelegatingMessageConverter`:
+
+```
+<bean id="contentTypeConverter"
+      class="org.springframework.amqp.support.converter.ContentTypeDelegatingMessageConverter">
+    <property name="delegates">
+        <map>
+            <entry key="application/json" value-ref="jsonMessageConverter"/>
+            <entry key="application/xml" value-ref="xmlMessageConverter"/>
+        </map>
+    </property>
+</bean>
+```
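+
+The equivalent configuration in Java (delegate converter contents assumed) might look like this:
+
+```
+@Bean
+public ContentTypeDelegatingMessageConverter contentTypeConverter() {
+    ContentTypeDelegatingMessageConverter converter = new ContentTypeDelegatingMessageConverter();
+    converter.addDelegate("application/json", new Jackson2JsonMessageConverter());
+    converter.addDelegate("application/xml", new Jackson2XmlMessageConverter());
+    return converter; // falls back to SimpleMessageConverter for other content types
+}
+```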
+
+##### Java Deserialization
+
+This section covers how to deserialize Java objects.
+
+| |There is a possible vulnerability when deserializing Java objects from untrusted sources. If you accept messages from untrusted sources with a `content-type` of `application/x-java-serialized-object`, you should consider configuring which packages and classes are allowed to be deserialized. This applies to both the `SimpleMessageConverter` and `SerializerMessageConverter` when it is configured to use a `DefaultDeserializer`, either implicitly or via configuration. By default, the allowed list is empty, meaning all classes are deserialized. You can set a list of patterns, such as `thing1.*`, `thing1.thing2.Cat`, or `*.MySafeClass`. The patterns are checked in order until a match is found. If there is no match, a `SecurityException` is thrown. You can set the patterns using the `allowedListPatterns` property on these converters.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+##### Message Properties Converters
+
+The `MessagePropertiesConverter` strategy interface is used to convert between the Rabbit Client `BasicProperties` and Spring AMQP `MessageProperties`.
+The default implementation (`DefaultMessagePropertiesConverter`) is usually sufficient for most purposes, but you can implement your own if needed.
+The default properties converter converts `BasicProperties` elements of type `LongString` to `String` instances when the size is not greater than `1024` bytes.
+Larger `LongString` instances are not converted (see the next paragraph).
+This limit can be overridden with a constructor argument.
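+
+For example, a sketch of raising the limit (the value is illustrative):
+
+```
+// convert LongString values up to 4096 bytes instead of the default 1024
+rabbitTemplate.setMessagePropertiesConverter(new DefaultMessagePropertiesConverter(4096));
+```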
+
+Starting with version 1.6, headers longer than the long string limit (default: 1024) are now left as `LongString` instances by default by the `DefaultMessagePropertiesConverter`.
+You can access the contents through the `getBytes()`, `toString()`, or `getStream()` methods.
+
+Previously, the `DefaultMessagePropertiesConverter` “converted” such headers to a `DataInputStream` (actually, it just referenced the `LongString` instance’s `DataInputStream`).
+On output, this header was not converted (except to a `String` obtained by calling `toString()` on the stream).
+
+Large incoming `LongString` headers are now correctly “converted” on output, too (by default).
+
+A new constructor is provided to let you configure the converter to work as before.
+The following listing shows the Javadoc comment and declaration of the method:
+
+```
+/**
+ * Construct an instance where LongStrings will be returned
+ * unconverted or as a java.io.DataInputStream when longer than this limit.
+ * Use this constructor with 'true' to restore pre-1.6 behavior.
+ * @param longStringLimit the limit.
+ * @param convertLongLongStrings LongString when false,
+ * DataInputStream when true.
+ * @since 1.6
+ */
+public DefaultMessagePropertiesConverter(int longStringLimit, boolean convertLongLongStrings) { ... }
+```
+
+Also starting with version 1.6, a new property called `correlationIdString` has been added to `MessageProperties`.
+Previously, when converting to and from `BasicProperties` used by the RabbitMQ client, an unnecessary `byte[] <-> String` conversion was performed, because `MessageProperties.correlationId` is a `byte[]` but `BasicProperties` uses a `String`.
+(Ultimately, the RabbitMQ client uses UTF-8 to convert the `String` to bytes to put in the protocol message).
+
+To provide maximum backwards compatibility, a new property called `correlationIdPolicy` has been added to the `DefaultMessagePropertiesConverter`.
+This takes a `DefaultMessagePropertiesConverter.CorrelationIdPolicy` enum argument.
+By default it is set to `BYTES`, which replicates the previous behavior.
+
+For inbound messages:
+
+* `STRING`: Only the `correlationIdString` property is mapped
+
+* `BYTES`: Only the `correlationId` property is mapped
+
+* `BOTH`: Both properties are mapped
+
+For outbound messages:
+
+* `STRING`: Only the `correlationIdString` property is mapped
+
+* `BYTES`: Only the `correlationId` property is mapped
+
+* `BOTH`: Both properties are considered, with the `String` property taking precedence
+
+Also starting with version 1.6, the inbound `deliveryMode` property is no longer mapped to `MessageProperties.deliveryMode`.
+It is mapped to `MessageProperties.receivedDeliveryMode` instead.
+Also, the inbound `userId` property is no longer mapped to `MessageProperties.userId`.
+It is mapped to `MessageProperties.receivedUserId` instead.
+These changes are to avoid unexpected propagation of these properties if the same `MessageProperties` object is used for an outbound message.
+
+Starting with version 2.2, the `DefaultMessagePropertiesConverter` converts any custom headers with values of type `Class<?>` using `getName()` instead of `toString()`; this avoids the consuming application having to parse the class name out of the `toString()` representation.
+For rolling upgrades, you may need to change your consumers to understand both formats until all producers are upgraded.
+
+#### 4.1.9. Modifying Messages - Compression and More
+
+A number of extension points exist.
+They let you perform some processing on a message, either before it is sent to RabbitMQ or immediately after it is received.
+
+As can be seen in [Message Converters](#message-converters), one such extension point is in the `AmqpTemplate` `convertAndReceive` operations, where you can provide a `MessagePostProcessor`.
+For example, after your POJO has been converted, the `MessagePostProcessor` lets you set custom headers or properties on the `Message`.
+
+Starting with version 1.4.2, additional extension points have been added to the `RabbitTemplate` - `setBeforePublishPostProcessors()` and `setAfterReceivePostProcessors()`.
+The first enables a post processor to run immediately before sending to RabbitMQ.
+When using batching (see [Batching](#template-batching)), this is invoked after the batch is assembled and before the batch is sent.
+The second is invoked immediately after a message is received.
+
+These extension points are used for such features as compression and, for this purpose, several `MessagePostProcessor` implementations are provided.
+`GZipPostProcessor`, `ZipPostProcessor`, and `DeflaterPostProcessor` compress messages before sending, and `GUnzipPostProcessor`, `UnzipPostProcessor`, and `InflaterPostProcessor` decompress received messages.
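+
+For example, a sketch of wiring compression on the sending side and decompression on a listener container:
+
+```
+// compress every outbound message immediately before publishing
+rabbitTemplate.setBeforePublishPostProcessors(new GZipPostProcessor());
+
+// decompress messages as they are received by the container
+listenerContainer.setAfterReceivePostProcessors(new GUnzipPostProcessor());
+```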
+
+| |Starting with version 2.1.5, the `GZipPostProcessor` can be configured with the `copyProperties = true` option to make a copy of the original message properties. By default, these properties are reused for performance reasons, and modified with compression content encoding and the optional `MessageProperties.SPRING_AUTO_DECOMPRESS` header. If you retain a reference to the original outbound message, its properties will change as well. So, if your application retains a copy of an outbound message with these message post processors, consider turning the `copyProperties` option on.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| |Starting with version 2.2.12, you can configure the delimiter that the compressing post processors use between content encoding elements. With versions 2.2.11 and before, this was hard-coded as `:`; it is now set to `, ` by default. The decompressors will work with both delimiters. However, if you publish messages with 2.3 or later and consume with 2.2.11 or earlier, you MUST set the `encodingDelimiter` property on the compressor(s) to `:`. When your consumers are upgraded to 2.2.11 or later, you can revert to the default of `, `.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Similarly, the `SimpleMessageListenerContainer` also has a `setAfterReceivePostProcessors()` method, letting the decompression be performed after messages are received by the container.
+
+Starting with version 2.1.4, `addBeforePublishPostProcessors()` and `addAfterReceivePostProcessors()` have been added to the `RabbitTemplate` to allow appending new post processors to the list of before publish and after receive post processors respectively.
+Also, there are methods provided to remove the post processors.
+Similarly, `AbstractMessageListenerContainer` also has `addAfterReceivePostProcessors()` and `removeAfterReceivePostProcessor()` methods added.
+See the Javadoc of `RabbitTemplate` and `AbstractMessageListenerContainer` for more detail.
+
+#### 4.1.10. Request/Reply Messaging
+
+The `AmqpTemplate` also provides a variety of `sendAndReceive` methods that accept the same argument options that were described earlier for the one-way send operations (`exchange`, `routingKey`, and `Message`).
+Those methods are quite useful for request-reply scenarios, since they handle the configuration of the necessary `reply-to` property before sending and can listen for the reply message on an exclusive queue that is created internally for that purpose.
+
+Similar request-reply methods are also available where the `MessageConverter` is applied to both the request and reply.
+Those methods are named `convertSendAndReceive`.
+See the [Javadoc of `AmqpTemplate`](https://docs.spring.io/spring-amqp/docs/latest-ga/api/org/springframework/amqp/core/AmqpTemplate.html) for more detail.
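+
+To illustrate, a sketch of a converted request/reply call (the exchange, routing key, and types are hypothetical):
+
+```
+// blocks until the reply arrives or the reply timeout elapses (returning null)
+Thing1 reply = (Thing1) rabbitTemplate.convertSendAndReceive("some.exchange",
+        "some.routing.key", new Thing2());
+```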
+
+Starting with version 1.5.0, each of the `sendAndReceive` method variants has an overloaded version that takes `CorrelationData`.
+Together with a properly configured connection factory, this enables the receipt of publisher confirms for the send side of the operation.
+See [Correlated Publisher Confirms and Returns](#template-confirms) and the [Javadoc for `RabbitOperations`](https://docs.spring.io/spring-amqp/docs/latest-ga/api/org/springframework/amqp/rabbit/core/RabbitOperations.html) for more information.
+
+Starting with version 2.0, there are variants of these methods (`convertSendAndReceiveAsType`) that take an additional `ParameterizedTypeReference` argument to convert complex returned types.
+The template must be configured with a `SmartMessageConverter`.
+See [Converting From a `Message` With `RabbitTemplate`](#json-complex) for more information.
+
+Starting with version 2.1, you can configure the `RabbitTemplate` with the `noLocalReplyConsumer` option to control a `noLocal` flag for reply consumers.
+This is `false` by default.
+
+##### Reply Timeout
+
+By default, the send and receive methods time out after five seconds and return null.
+You can modify this behavior by setting the `replyTimeout` property.
+Starting with version 1.5, if you set the `mandatory` property to `true` (or the `mandatory-expression` evaluates to `true` for a particular message), if the message cannot be delivered to a queue, an `AmqpMessageReturnedException` is thrown.
+This exception has `returnedMessage`, `replyCode`, and `replyText` properties, as well as the `exchange` and `routingKey` used for the send.
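+
+For illustration, a hedged sketch of handling a returned request (the exchange and routing key are hypothetical placeholders):
+
+```
+import org.springframework.amqp.rabbit.core.AmqpMessageReturnedException;
+import org.springframework.amqp.rabbit.core.RabbitTemplate;
+
+public class MandatorySendExample {
+
+    public String sendOrReport(RabbitTemplate template, String payload) {
+        template.setMandatory(true); // requires publisherReturns=true on the connection factory
+        try {
+            return (String) template.convertSendAndReceive("some.exchange", "no.such.queue", payload);
+        }
+        catch (AmqpMessageReturnedException e) {
+            // The request could not be routed to any queue
+            return "returned: " + e.getReplyText() + " (" + e.getReplyCode() + ")";
+        }
+    }
+}
+```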
+
+| |This feature uses publisher returns. You can enable it by setting `publisherReturns` to `true` on the `CachingConnectionFactory` (see [Publisher Confirms and Returns](#cf-pub-conf-ret)). Also, you must not have registered your own `ReturnCallback` with the `RabbitTemplate`.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Starting with version 2.1.2, a `replyTimedOut` method has been added, letting subclasses be informed of the timeout so that they can clean up any retained state.
+
+Starting with versions 2.0.11 and 2.1.3, when you use the default `DirectReplyToMessageListenerContainer`, you can add an error handler by setting the template’s `replyErrorHandler` property.
+This error handler is invoked for any failed deliveries, such as late replies and messages received without a correlation header.
+The exception passed in is a `ListenerExecutionFailedException`, which has a `failedMessage` property.
+
+##### RabbitMQ Direct reply-to
+
+| |Starting with version 3.4.0, the RabbitMQ server supports [direct reply-to](https://www.rabbitmq.com/direct-reply-to.html). This eliminates the main reason for a fixed reply queue (to avoid the need to create a temporary queue for each request). Starting with Spring AMQP version 1.4.1 direct reply-to is used by default (if supported by the server) instead of creating temporary reply queues. When no `replyQueue` is provided (or it is set with a name of `amq.rabbitmq.reply-to`), the `RabbitTemplate` automatically detects whether direct reply-to is supported and either uses it or falls back to using a temporary reply queue. When using direct reply-to, a `reply-listener` is not required and should not be configured.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Reply listeners are still supported with named queues (other than `amq.rabbitmq.reply-to`), allowing control of reply concurrency and so on.
+
+Starting with version 1.6, if you wish to use a temporary, exclusive, auto-delete queue for each
+reply, set the `useTemporaryReplyQueues` property to `true`.
+This property is ignored if you set a `replyAddress`.
+
+You can change the criteria that dictate whether to use direct reply-to by subclassing `RabbitTemplate` and overriding `useDirectReplyTo()` to check different criteria.
+The method is called once only, when the first request is sent.
+
+Prior to version 2.0, the `RabbitTemplate` created a new consumer for each request and canceled the consumer when the reply was received (or timed out).
+Now the template uses a `DirectReplyToMessageListenerContainer` instead, letting the consumers be reused.
+The template still takes care of correlating the replies, so there is no danger of a late reply going to a different sender.
+If you want to revert to the previous behavior, set the `useDirectReplyToContainer` (`direct-reply-to-container` when using XML configuration) property to false.
+
+The `AsyncRabbitTemplate` has no such option.
+It always uses a `DirectReplyToContainer` for replies when direct reply-to is in use.
+
+Starting with version 2.3.7, the template has a new property `useChannelForCorrelation`.
+When this is `true`, the server does not have to copy the correlation id from the request message headers to the reply message.
+Instead, the channel used to send the request is used to correlate the reply to the request.
+
+##### Message Correlation With A Reply Queue
+
+When using a fixed reply queue (other than `amq.rabbitmq.reply-to`), you must provide correlation data so that replies can be correlated to requests.
+See [RabbitMQ Remote Procedure Call (RPC)](https://www.rabbitmq.com/tutorials/tutorial-six-java.html).
+By default, the standard `correlationId` property is used to hold the correlation data.
+However, if you wish to use a custom property to hold correlation data, you can set the `correlation-key` attribute on the `<rabbit:template/>`.
+Explicitly setting the attribute to `correlationId` is the same as omitting the attribute.
+The client and server must use the same header for correlation data.
+
+| |Spring AMQP version 1.1 used a custom property called `spring_reply_correlation` for this data. If you wish to revert to this behavior with the current version (perhaps to maintain compatibility with another application using 1.1), you must set the attribute to `spring_reply_correlation`.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+By default, the template generates its own correlation ID (ignoring any user-supplied value).
+If you wish to use your own correlation ID, set the `RabbitTemplate` instance’s `userCorrelationId` property to `true`.
+
+| |The correlation ID must be unique to avoid the possibility of a wrong reply being returned for a request.|
+|---|---------------------------------------------------------------------------------------------------------|
+
+##### Reply Listener Container
+
+When using RabbitMQ versions prior to 3.4.0, a new temporary queue is used for each reply.
+However, a single reply queue can be configured on the template, which can be more efficient and also lets you set arguments on that queue.
+In this case, however, you must also provide a `<reply-listener/>` sub-element.
+This element provides a listener container for the reply queue, with the template being the listener.
+All of the [Message Listener Container Configuration](#containerAttributes) attributes allowed on a `<listener-container/>` are allowed on the element, except for `connection-factory` and `message-converter`, which are inherited from the template’s configuration.
+
+| |If you run multiple instances of your application or use multiple `RabbitTemplate` instances, you **MUST** use a unique reply queue for each. RabbitMQ has no ability to select messages from a queue, so, if they all use the same queue, each instance would compete for replies and not necessarily receive their own.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The following example defines a rabbit template with a connection factory:
+
+```
+<rabbit:template id="amqpTemplate"
+        connection-factory="connectionFactory"
+        reply-queue="replies">
+    <rabbit:reply-listener/>
+</rabbit:template>
+```
+
+While the container and template share a connection factory, they do not share a channel.
+Therefore, requests and replies are not performed within the same transaction (if transactional).
+
+| |Prior to version 1.5.0, the `reply-address` attribute was not available. Replies were always routed by using the default exchange and the `reply-queue` name as the routing key. This is still the default, but you can now specify the new `reply-address` attribute. The `reply-address` can contain an address with the form `<exchange>/<routingKey>`, and the reply is routed to the specified exchange and routed to a queue bound with the routing key. The `reply-address` has precedence over `reply-queue`. When only `reply-address` is in use, the `<reply-listener>` must be configured as a separate `<listener-container>` component. The `reply-address` and `reply-queue` (or the `queues` attribute on the `<listener-container>`) must refer to the same queue logically.|
+|---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+With this configuration, a `SimpleMessageListenerContainer` is used to receive the replies, with the `RabbitTemplate` being the `MessageListener`.
+When defining a template with the `<rabbit:template>` namespace element, as shown in the preceding example, the parser defines the container and wires in the template as the listener.
+
+| |When the template does not use a fixed `replyQueue` (or is using direct reply-to — see [RabbitMQ Direct reply-to](#direct-reply-to)), a listener container is not needed. Direct `reply-to` is the preferred mechanism when using RabbitMQ 3.4.0 or later.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+If you define your `RabbitTemplate` as a `<bean/>`, use an `@Configuration` class to define it as an `@Bean`, or create the template programmatically, you need to define and wire up the reply listener container yourself.
+If you fail to do this, the template never receives the replies and eventually times out and returns null as the reply to a call to a `sendAndReceive` method.
+
+Starting with version 1.5, the `RabbitTemplate` detects if it has been
+configured as a `MessageListener` to receive replies.
+If not, attempts to send and receive messages with a reply address
+fail with an `IllegalStateException` (because the replies are never received).
+
+Further, if a simple `replyAddress` (queue name) is used, the reply listener container verifies that it is listening
+to a queue with the same name.
+This check cannot be performed when the reply address is an exchange and routing key; in that case, a debug log message is written.
+
+| |When wiring the reply listener and template yourself, it is important to ensure that the template’s `replyAddress` and the container’s `queues` (or `queueNames`) properties refer to the same queue. The template inserts the reply address into the outbound message `replyTo` property.|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The following listing shows examples of how to manually wire up the beans:
+
+```
+<bean id="amqpTemplate" class="org.springframework.amqp.rabbit.core.RabbitTemplate">
+    <constructor-arg ref="connectionFactory" />
+    <property name="exchange" value="my.exchange" />
+    <property name="routingKey" value="my.routing.key" />
+    <property name="replyAddress" value="my.reply.queue" />
+    <property name="replyTimeout" value="60000" />
+    <property name="useDirectReplyToContainer" value="false" />
+</bean>
+
+<bean class="org.springframework.amqp.rabbit.config.ListenerContainerFactoryBean">
+    <property name="connectionFactory" ref="connectionFactory" />
+    <property name="queueNames" value="my.reply.queue" />
+    <property name="messageListener" ref="amqpTemplate" />
+</bean>
+
+<rabbit:queue name="my.reply.queue" />
+```
+
+```
+ @Bean
+ public RabbitTemplate amqpTemplate() {
+ RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
+ rabbitTemplate.setMessageConverter(msgConv());
+ rabbitTemplate.setReplyAddress(replyQueue().getName());
+ rabbitTemplate.setReplyTimeout(60000);
+ rabbitTemplate.setUseDirectReplyToContainer(false);
+ return rabbitTemplate;
+ }
+
+ @Bean
+ public SimpleMessageListenerContainer replyListenerContainer() {
+ SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
+ container.setConnectionFactory(connectionFactory());
+ container.setQueues(replyQueue());
+ container.setMessageListener(amqpTemplate());
+ return container;
+ }
+
+ @Bean
+ public Queue replyQueue() {
+ return new Queue("my.reply.queue");
+ }
+```
+
+A complete example of a `RabbitTemplate` wired with a fixed reply queue, together with a “remote” listener container that handles the request and returns the reply is shown in [this test case](https://github.com/spring-projects/spring-amqp/tree/main/spring-rabbit/src/test/java/org/springframework/amqp/rabbit/listener/JavaConfigFixedReplyQueueTests.java).
+
+| |When the reply times out (`replyTimeout`), the `sendAndReceive()` methods return null.|
+|---|--------------------------------------------------------------------------------------|
+
+Prior to version 1.3.6, late replies for timed out messages were only logged.
+Now, if a late reply is received, it is rejected (the template throws an `AmqpRejectAndDontRequeueException`).
+If the reply queue is configured to send rejected messages to a dead letter exchange, the reply can be retrieved for later analysis.
+To do so, bind a queue to the configured dead letter exchange with a routing key equal to the reply queue’s name.
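+
+For illustration, a Java configuration sketch of such a binding, using hypothetical names (`reply.dlx`, `late.replies`, and a reply queue named `my.reply.queue`):
+
+```
+import org.springframework.amqp.core.Binding;
+import org.springframework.amqp.core.BindingBuilder;
+import org.springframework.amqp.core.DirectExchange;
+import org.springframework.amqp.core.Queue;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+
+@Configuration
+public class LateReplyDlqConfig {
+
+    @Bean
+    public DirectExchange replyDlx() {
+        return new DirectExchange("reply.dlx");
+    }
+
+    @Bean
+    public Queue lateReplies() {
+        return new Queue("late.replies");
+    }
+
+    @Bean
+    public Binding lateReplyBinding() {
+        // Routing key equal to the reply queue's name, as required
+        return BindingBuilder.bind(lateReplies()).to(replyDlx()).with("my.reply.queue");
+    }
+}
+```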
+
+See the [RabbitMQ Dead Letter Documentation](https://www.rabbitmq.com/dlx.html) for more information about configuring dead lettering.
+You can also take a look at the `FixedReplyQueueDeadLetterTests` test case for an example.
+
+##### Async Rabbit Template
+
+Version 1.6 introduced the `AsyncRabbitTemplate`.
+This has similar `sendAndReceive` (and `convertSendAndReceive`) methods to those on the [`AmqpTemplate`](#amqp-template).
+However, instead of blocking, they return a `ListenableFuture`.
+
+The `sendAndReceive` methods return a `RabbitMessageFuture`.
+The `convertSendAndReceive` methods return a `RabbitConverterFuture`.
+
+You can either synchronously retrieve the result later, by invoking `get()` on the future, or you can register a callback that is called asynchronously with the result.
+The following listing shows both approaches:
+
+```
+@Autowired
+private AsyncRabbitTemplate template;
+
+...
+
+public void doSomeWorkAndGetResultLater() {
+
+ ...
+
+    ListenableFuture<String> future = this.template.convertSendAndReceive("foo");
+
+ // do some more work
+
+ String reply = null;
+ try {
+ reply = future.get();
+ }
+ catch (ExecutionException e) {
+ ...
+ }
+
+ ...
+
+}
+
+public void doSomeWorkAndGetResultAsync() {
+
+ ...
+
+    RabbitConverterFuture<String> future = this.template.convertSendAndReceive("foo");
+    future.addCallback(new ListenableFutureCallback<String>() {
+
+ @Override
+ public void onSuccess(String result) {
+ ...
+ }
+
+ @Override
+ public void onFailure(Throwable ex) {
+ ...
+ }
+
+ });
+
+ ...
+
+}
+```
+
+If `mandatory` is set and the message cannot be delivered, the future throws an `ExecutionException` with a cause of `AmqpMessageReturnedException`, which encapsulates the returned message and information about the return.
+
+If `enableConfirms` is set, the future has a property called `confirm`, which is itself a `ListenableFuture<Boolean>`, with `true` indicating a successful publish.
+If the confirm future’s result is `false`, the `RabbitFuture` has a further property called `nackCause`, which contains the reason for the failure, if available.
+
+| |The publisher confirm is discarded if it is received after the reply, since the reply implies a successful publish.|
+|---|-------------------------------------------------------------------------------------------------------------------|
+
+You can set the `receiveTimeout` property on the template to time out replies (it defaults to `30000` - 30 seconds).
+If a timeout occurs, the future is completed with an `AmqpReplyTimeoutException`.
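+
+The confirm behavior can be sketched as follows (a minimal illustration, not a complete application; `"foo"` is a placeholder payload):
+
+```
+import java.util.concurrent.ExecutionException;
+
+import org.springframework.amqp.rabbit.AsyncRabbitTemplate;
+import org.springframework.amqp.rabbit.AsyncRabbitTemplate.RabbitConverterFuture;
+
+public class AsyncConfirmExample {
+
+    public void sendWithConfirm(AsyncRabbitTemplate template)
+            throws InterruptedException, ExecutionException {
+        template.setEnableConfirms(true);
+        RabbitConverterFuture<String> future = template.convertSendAndReceive("foo");
+        // The confirm future completes when the broker acks or nacks the publish
+        if (Boolean.FALSE.equals(future.getConfirm().get())) {
+            System.out.println("Publish nacked: " + future.getNackCause());
+        }
+    }
+}
+```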
+
+The template implements `SmartLifecycle`.
+Stopping the template while there are pending replies causes the pending `Future` instances to be canceled.
+
+Starting with version 2.0, the asynchronous template now supports [direct reply-to](https://www.rabbitmq.com/direct-reply-to.html) instead of a configured reply queue.
+To enable this feature, use one of the following constructors:
+
+```
+public AsyncRabbitTemplate(ConnectionFactory connectionFactory, String exchange, String routingKey)
+
+public AsyncRabbitTemplate(RabbitTemplate template)
+```
+
+See [RabbitMQ Direct reply-to](#direct-reply-to) to use direct reply-to with the synchronous `RabbitTemplate`.
+
+Version 2.0 introduced variants of these methods (`convertSendAndReceiveAsType`) that take an additional `ParameterizedTypeReference` argument to convert complex returned types.
+You must configure the underlying `RabbitTemplate` with a `SmartMessageConverter`.
+See [Converting From a `Message` With `RabbitTemplate`](#json-complex) for more information.
+
+##### Spring Remoting with AMQP
+
+| |This feature is deprecated and will be removed in 3.0. It has been superseded for a long time by [Handling Exceptions](#annotation-error-handling) with the `returnExceptions` being set to true, and configuring a `RemoteInvocationAwareMessageConverterAdapter` on the sending side. See [Handling Exceptions](#annotation-error-handling) for more information.|
+|---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The Spring Framework has a general remoting capability, allowing [Remote Procedure Calls (RPC)](https://docs.spring.io/spring/docs/current/spring-framework-reference/html/remoting.html) that use various transports.
+Spring AMQP supports a similar mechanism, with an `AmqpProxyFactoryBean` on the client and an `AmqpInvokerServiceExporter` on the server.
+This provides RPC over AMQP.
+On the client side, a `RabbitTemplate` is used as described [earlier](#reply-listener).
+On the server side, the invoker (configured as a `MessageListener`) receives the message, invokes the configured service, and returns the reply by using the inbound message’s `replyTo` information.
+
+You can inject the client factory bean into any bean (by using its `serviceInterface`).
+The client can then invoke methods on the proxy, resulting in remote execution over AMQP.
+
+| |With the default `MessageConverter` instances, the method parameters and returned value must be instances of `Serializable`.|
+|---|----------------------------------------------------------------------------------------------------------------------------|
+
+On the server side, the `AmqpInvokerServiceExporter` has both `AmqpTemplate` and `MessageConverter` properties.
+Currently, the template’s `MessageConverter` is not used.
+If you need to supply a custom message converter, you should provide it by setting the `messageConverter` property.
+On the client side, you can add a custom message converter to the `AmqpTemplate`, which is provided to the `AmqpProxyFactoryBean` by using its `amqpTemplate` property.
+
+The following listing shows sample client and server configurations:
+
+```
+<bean id="client"
+    class="org.springframework.amqp.remoting.client.AmqpProxyFactoryBean">
+    <property name="amqpTemplate" ref="template" />
+    <property name="serviceInterface" value="foo.ServiceInterface" />
+</bean>
+
+<rabbit:connection-factory id="connectionFactory" />
+
+<rabbit:template id="template" connection-factory="connectionFactory" reply-timeout="2000"
+    routing-key="remoting.binding" exchange="remoting.exchange" />
+
+<rabbit:admin connection-factory="connectionFactory" />
+```
+
+```
+<bean id="listener"
+    class="org.springframework.amqp.remoting.service.AmqpInvokerServiceExporter">
+    <property name="serviceInterface" value="foo.ServiceInterface" />
+    <property name="service" ref="service" />
+    <property name="amqpTemplate" ref="template" />
+</bean>
+
+<bean id="service" class="foo.ServiceImpl" />
+
+<rabbit:connection-factory id="connectionFactory" />
+
+<rabbit:template id="template" connection-factory="connectionFactory" />
+
+<rabbit:queue name="remoting.queue" />
+
+<rabbit:admin connection-factory="connectionFactory" />
+
+<rabbit:listener-container connection-factory="connectionFactory">
+    <rabbit:listener ref="listener" queue-names="remoting.queue" />
+</rabbit:listener-container>
+```
+
+| |The `AmqpInvokerServiceExporter` can process only properly formed messages, such as those sent from the `AmqpProxyFactoryBean`. If it receives a message that it cannot interpret, a serialized `RuntimeException` is sent as a reply. If the message has no `replyToAddress` property, the message is rejected and permanently lost if no dead letter exchange has been configured.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| |By default, if the request message cannot be delivered, the calling thread eventually times out and a `RemoteProxyFailureException` is thrown. By default, the timeout is five seconds. You can modify that duration by setting the `replyTimeout` property on the `RabbitTemplate`. Starting with version 1.5, by setting the `mandatory` property to `true` and enabling returns on the connection factory (see [Publisher Confirms and Returns](#cf-pub-conf-ret)), the calling thread throws an `AmqpMessageReturnedException`. See [Reply Timeout](#reply-timeout) for more information.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+#### 4.1.11. Configuring the Broker
+
+The AMQP specification describes how the protocol can be used to configure queues, exchanges, and bindings on the broker.
+These operations (which are portable from the 0.8 specification and higher) are present in the `AmqpAdmin` interface in the `org.springframework.amqp.core` package.
+The RabbitMQ implementation of that class is `RabbitAdmin` located in the `org.springframework.amqp.rabbit.core` package.
+
+The `AmqpAdmin` interface is based on using the Spring AMQP domain abstractions and is shown in the following listing:
+
+```
+public interface AmqpAdmin {
+
+ // Exchange Operations
+
+ void declareExchange(Exchange exchange);
+
+ void deleteExchange(String exchangeName);
+
+ // Queue Operations
+
+ Queue declareQueue();
+
+ String declareQueue(Queue queue);
+
+ void deleteQueue(String queueName);
+
+ void deleteQueue(String queueName, boolean unused, boolean empty);
+
+ void purgeQueue(String queueName, boolean noWait);
+
+ // Binding Operations
+
+ void declareBinding(Binding binding);
+
+ void removeBinding(Binding binding);
+
+ Properties getQueueProperties(String queueName);
+
+}
+```
+
+See also [Scoped Operations](#scoped-operations).
+
+The `getQueueProperties()` method returns some limited information about the queue (message count and consumer count).
+The keys for the properties returned are available as constants in the `RabbitTemplate` (`QUEUE_NAME`, `QUEUE_MESSAGE_COUNT`, and `QUEUE_CONSUMER_COUNT`).
+The [RabbitMQ REST API](#management-rest-api) provides much more information in the `QueueInfo` object.
+
+The no-arg `declareQueue()` method defines a queue on the broker with a name that is automatically generated.
+The additional properties of this auto-generated queue are `exclusive=true`, `autoDelete=true`, and `durable=false`.
+
+The `declareQueue(Queue queue)` method takes a `Queue` object and returns the name of the declared queue.
+If the `name` property of the provided `Queue` is an empty `String`, the broker declares the queue with a generated name.
+That name is returned to the caller.
+That name is also added to the `actualName` property of the `Queue`.
+You can use this functionality programmatically only by invoking the `RabbitAdmin` directly.
+When using auto-declaration by the admin when defining a queue declaratively in the application context, you can set the name property to `""` (the empty string).
+The broker then creates the name.
+Starting with version 2.1, listener containers can use queues of this type.
+See [Containers and Broker-Named queues](#containers-and-broker-named-queues) for more information.
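+
+A minimal sketch of declaring a broker-named queue programmatically:
+
+```
+import org.springframework.amqp.core.Queue;
+import org.springframework.amqp.rabbit.core.RabbitAdmin;
+
+public class BrokerNamedQueueExample {
+
+    public String declareServerNamedQueue(RabbitAdmin admin) {
+        Queue queue = new Queue(""); // empty name: let the broker generate one
+        String actualName = admin.declareQueue(queue);
+        // The generated name is also available afterwards via queue.getActualName()
+        return actualName;
+    }
+}
+```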
+
+This is in contrast to an `AnonymousQueue`, where the framework generates a unique (`UUID`) name and sets `durable` to `false` and both `exclusive` and `autoDelete` to `true`.
+A `<rabbit:queue/>` with an empty (or missing) `name` attribute always creates an `AnonymousQueue`.
+
+See [`AnonymousQueue`](#anonymous-queue) to understand why `AnonymousQueue` is preferred over broker-generated queue names as well as
+how to control the format of the name.
+Starting with version 2.1, anonymous queues are declared with argument `Queue.X_QUEUE_LEADER_LOCATOR` set to `client-local` by default.
+This ensures that the queue is declared on the node to which the application is connected.
+Declarative queues must have fixed names because they might be referenced elsewhere in the context — such as in the
+listener shown in the following example:
+
+```
+<rabbit:listener-container connection-factory="connectionFactory">
+    <rabbit:listener ref="listener" queue-names="my.durable.queue" method="handle"/>
+</rabbit:listener-container>
+```
+
+See [Automatic Declaration of Exchanges, Queues, and Bindings](#automatic-declaration).
+
+The RabbitMQ implementation of this interface is `RabbitAdmin`, which, when configured by using Spring XML, resembles the following example:
+
+```
+<rabbit:connection-factory id="connectionFactory"/>
+
+<rabbit:admin id="amqpAdmin" connection-factory="connectionFactory"/>
+```
+
+When the `CachingConnectionFactory` cache mode is `CHANNEL` (the default), the `RabbitAdmin` implementation does automatic lazy declaration of queues, exchanges, and bindings declared in the same `ApplicationContext`.
+These components are declared as soon as a `Connection` is opened to the broker.
+There are some namespace features that make this very convenient — for example,
+in the Stocks sample application, we have the following:
+
+```
+<rabbit:queue id="tradeQueue"/>
+
+<rabbit:queue id="marketDataQueue"/>
+
+<fanout-exchange name="broadcast.responses"
+                 xmlns="http://www.springframework.org/schema/rabbit">
+    <bindings>
+        <binding queue="tradeQueue"/>
+    </bindings>
+</fanout-exchange>
+
+<topic-exchange name="app.stock.marketdata"
+                xmlns="http://www.springframework.org/schema/rabbit">
+    <bindings>
+        <binding queue="marketDataQueue" pattern="${stocks.quote.pattern}"/>
+    </bindings>
+</topic-exchange>
+```
+
+In the preceding example, we use anonymous queues (actually, internally, just queues with names generated by the framework, not by the broker) and refer to them by ID.
+We can also declare queues with explicit names, which also serve as identifiers for their bean definitions in the context.
+The following example configures a queue with an explicit name:
+
+```
+<rabbit:queue name="my.queue"/>
+```
+
+| |You can provide both `id` and `name` attributes. This lets you refer to the queue (for example, in a binding) by an ID that is independent of the queue name. It also allows standard Spring features (such as property placeholders and SpEL expressions for the queue name). These features are not available when you use the name as the bean identifier.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Queues can be configured with additional arguments — for example, `x-message-ttl`.
+When you use the namespace support, they are provided in the form of a `Map` of argument-name/argument-value pairs, which are defined by using the `<rabbit:queue-arguments>` element.
+The following example shows how to do so:
+
+```
+<rabbit:queue name="withArguments">
+    <rabbit:queue-arguments>
+        <entry key="x-dead-letter-exchange" value="myDLX"/>
+    </rabbit:queue-arguments>
+</rabbit:queue>
+```
+
+By default, the arguments are assumed to be strings.
+For arguments of other types, you must provide the type.
+The following example shows how to specify the type:
+
+```
+<rabbit:queue name="withArguments">
+    <rabbit:queue-arguments value-type="java.lang.Long">
+        <entry key="x-message-ttl" value="100"/>
+    </rabbit:queue-arguments>
+</rabbit:queue>
+```
+
+When providing arguments of mixed types, you must provide the type for each entry element.
+The following example shows how to do so:
+
+```
+<rabbit:queue name="withArguments">
+    <rabbit:queue-arguments>
+        <entry key="x-message-ttl">
+            <value type="java.lang.Long">100</value>
+        </entry>
+        <entry key="x-ha-policy" value="all"/>
+    </rabbit:queue-arguments>
+</rabbit:queue>
+```
+
+With Spring Framework 3.2 and later, this can be declared a little more succinctly, as follows:
+
+```
+<rabbit:queue name="withArguments">
+    <rabbit:queue-arguments>
+        <entry key="x-message-ttl" value="100" value-type="java.lang.Long"/>
+        <entry key="x-ha-policy" value="all"/>
+    </rabbit:queue-arguments>
+</rabbit:queue>
+```
+
+When you use Java configuration, the `Queue.X_QUEUE_LEADER_LOCATOR` argument is supported as a first class property through the `setLeaderLocator()` method on the `Queue` class.
+Starting with version 2.1, anonymous queues are declared with this property set to `client-local` by default.
+This ensures that the queue is declared on the node the application is connected to.
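+
+For example (a sketch with a hypothetical queue name):
+
+```
+import org.springframework.amqp.core.Queue;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+
+@Configuration
+public class LeaderLocatorConfig {
+
+    @Bean
+    public Queue localLeaderQueue() {
+        Queue queue = new Queue("my.locator.queue");
+        // Sets the Queue.X_QUEUE_LEADER_LOCATOR argument without building the Map by hand
+        queue.setLeaderLocator("client-local");
+        return queue;
+    }
+}
+```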
+
+| |The RabbitMQ broker does not allow declaration of a queue with mismatched arguments. For example, if a `queue` already exists with no `time to live` argument, and you attempt to declare it with (for example) `key="x-message-ttl" value="100"`, an exception is thrown.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+By default, the `RabbitAdmin` immediately stops processing all declarations when any exception occurs.
+This could cause downstream issues, such as a listener container failing to initialize because another queue (defined after the one in error) is not declared.
+
+This behavior can be modified by setting the `ignore-declaration-exceptions` attribute to `true` on the `RabbitAdmin` instance.
+This option instructs the `RabbitAdmin` to log the exception and continue declaring other elements.
+When configuring the `RabbitAdmin` using Java, this property is called `ignoreDeclarationExceptions`.
+This is a global setting that applies to all elements.
+Queues, exchanges, and bindings have a similar property that applies to just those elements.
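+
+A minimal Java configuration sketch of the global setting:
+
+```
+import org.springframework.amqp.rabbit.connection.ConnectionFactory;
+import org.springframework.amqp.rabbit.core.RabbitAdmin;
+import org.springframework.context.annotation.Bean;
+import org.springframework.context.annotation.Configuration;
+
+@Configuration
+public class AdminConfig {
+
+    @Bean
+    public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
+        RabbitAdmin admin = new RabbitAdmin(connectionFactory);
+        // Log and continue instead of stopping all declarations on the first failure
+        admin.setIgnoreDeclarationExceptions(true);
+        return admin;
+    }
+}
+```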
+
+Prior to version 1.6, this property took effect only if an `IOException` occurred on the channel, such as when there is a mismatch between current and desired properties.
+Now, this property takes effect on any exception, including `TimeoutException` and others.
+
+In addition, any declaration exceptions result in the publishing of a `DeclarationExceptionEvent`, which is an `ApplicationEvent` that can be consumed by any `ApplicationListener` in the context.
+The event contains a reference to the admin, the element that was being declared, and the `Throwable`.
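+
+For example, a sketch of such a listener:
+
+```
+import org.springframework.amqp.rabbit.core.DeclarationExceptionEvent;
+import org.springframework.context.ApplicationListener;
+import org.springframework.stereotype.Component;
+
+@Component
+public class DeclarationFailureLogger implements ApplicationListener<DeclarationExceptionEvent> {
+
+    @Override
+    public void onApplicationEvent(DeclarationExceptionEvent event) {
+        // The event carries the element being declared and the cause of the failure
+        System.err.println("Failed to declare " + event.getDeclarable()
+                + ": " + event.getThrowable().getMessage());
+    }
+}
+```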
+
+##### Headers Exchange
+
+Starting with version 1.3, you can configure the `HeadersExchange` to match on multiple headers.
+You can also specify whether any or all headers must match.
+The following example shows how to do so:
+
+```
+<rabbit:headers-exchange name="headers-test">
+    <rabbit:bindings>
+        <rabbit:binding queue="bucket">
+            <rabbit:binding-arguments>
+                <entry key="foo" value="bar"/>
+                <entry key="baz" value="qux"/>
+                <entry key="x-match" value="all"/>
+            </rabbit:binding-arguments>
+        </rabbit:binding>
+    </rabbit:bindings>
+</rabbit:headers-exchange>
+```
+
+Starting with version 1.6, you can configure `Exchanges` with an `internal` flag (which defaults to `false`), and such an `Exchange` is properly configured on the broker through a `RabbitAdmin` (if one is present in the application context).
+If the `internal` flag is `true` for an exchange, RabbitMQ does not let clients use the exchange.
+This is useful for a dead letter exchange or exchange-to-exchange binding, where you do not wish the exchange to be used
+directly by publishers.
+
+To see how to use Java to configure the AMQP infrastructure, look at the Stock sample application,
+where there is the `@Configuration` class `AbstractStockAppRabbitConfiguration`, which, in turn, has `RabbitClientConfiguration` and `RabbitServerConfiguration` subclasses.
+The following listing shows the code for `AbstractStockRabbitConfiguration`:
+
+```
+@Configuration
+public abstract class AbstractStockAppRabbitConfiguration {
+
+ @Bean
+ public CachingConnectionFactory connectionFactory() {
+ CachingConnectionFactory connectionFactory =
+ new CachingConnectionFactory("localhost");
+ connectionFactory.setUsername("guest");
+ connectionFactory.setPassword("guest");
+ return connectionFactory;
+ }
+
+ @Bean
+ public RabbitTemplate rabbitTemplate() {
+ RabbitTemplate template = new RabbitTemplate(connectionFactory());
+ template.setMessageConverter(jsonMessageConverter());
+ configureRabbitTemplate(template);
+ return template;
+ }
+
+ @Bean
+ public Jackson2JsonMessageConverter jsonMessageConverter() {
+ return new Jackson2JsonMessageConverter();
+ }
+
+ @Bean
+ public TopicExchange marketDataExchange() {
+ return new TopicExchange("app.stock.marketdata");
+ }
+
+ // additional code omitted for brevity
+
+}
+```
+
+In the Stock application, the server is configured by using the following `@Configuration` class:
+
+```
+@Configuration
+public class RabbitServerConfiguration extends AbstractStockAppRabbitConfiguration {
+
+ @Bean
+ public Queue stockRequestQueue() {
+ return new Queue("app.stock.request");
+ }
+}
+```
+
+This is the end of the whole inheritance chain of `@Configuration` classes.
+The end result is that `TopicExchange` and `Queue` are declared to the broker upon application startup.
+There is no binding of `TopicExchange` to a queue in the server configuration, as that is done in the client application.
+The stock request queue, however, is automatically bound to the AMQP default exchange.
+This behavior is defined by the specification.
+
+The client `@Configuration` class is a little more interesting.
+Its declaration follows:
+
+```
+@Configuration
+public class RabbitClientConfiguration extends AbstractStockAppRabbitConfiguration {
+
+ @Value("${stocks.quote.pattern}")
+ private String marketDataRoutingKey;
+
+ @Bean
+ public Queue marketDataQueue() {
+ return amqpAdmin().declareQueue();
+ }
+
+ /**
+ * Binds to the market data exchange.
+ * Interested in any stock quotes
+ * that match its routing key.
+ */
+ @Bean
+ public Binding marketDataBinding() {
+ return BindingBuilder.bind(
+ marketDataQueue()).to(marketDataExchange()).with(marketDataRoutingKey);
+ }
+
+ // additional code omitted for brevity
+
+}
+```
+
+The client declares another queue through the `declareQueue()` method on the `AmqpAdmin`.
+It binds that queue to the market data exchange with a routing pattern that is externalized in a properties file.
+
+##### Builder API for Queues and Exchanges
+
+Version 1.6 introduces a convenient fluent API for configuring `Queue` and `Exchange` objects when using Java configuration.
+The following example shows how to use it:
+
+```
+@Bean
+public Queue queue() {
+ return QueueBuilder.nonDurable("foo")
+ .autoDelete()
+ .exclusive()
+ .withArgument("foo", "bar")
+ .build();
+}
+
+@Bean
+public Exchange exchange() {
+ return ExchangeBuilder.directExchange("foo")
+ .autoDelete()
+ .internal()
+ .withArgument("foo", "bar")
+ .build();
+}
+```
+
+See the Javadoc for [`org.springframework.amqp.core.QueueBuilder`](https://docs.spring.io/spring-amqp/docs/latest-ga/api/org/springframework/amqp/core/QueueBuilder.html) and [`org.springframework.amqp.core.ExchangeBuilder`](https://docs.spring.io/spring-amqp/docs/latest-ga/api/org/springframework/amqp/core/ExchangeBuilder.html) for more information.
+
+Starting with version 2.0, the `ExchangeBuilder` now creates durable exchanges by default, to be consistent with the simple constructors on the individual `AbstractExchange` classes.
+To make a non-durable exchange with the builder, use `.durable(false)` before invoking `.build()`.
+The `durable()` method with no parameter is no longer provided.
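+
+For example, a minimal sketch (the exchange name is a placeholder):
+
+```java
+@Bean
+public Exchange transientExchange() {
+    // Exchanges built by ExchangeBuilder are durable by default since 2.0
+    return ExchangeBuilder.fanoutExchange("transient.fanout")
+            .durable(false)
+            .autoDelete()
+            .build();
+}
+```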
+
+Version 2.2 introduced fluent APIs to add "well known" exchange and queue arguments, as the following example shows:
+
+```
+@Bean
+public Queue allArgs1() {
+ return QueueBuilder.nonDurable("all.args.1")
+ .ttl(1000)
+ .expires(200_000)
+ .maxLength(42)
+ .maxLengthBytes(10_000)
+ .overflow(Overflow.rejectPublish)
+ .deadLetterExchange("dlx")
+ .deadLetterRoutingKey("dlrk")
+ .maxPriority(4)
+ .lazy()
+ .leaderLocator(LeaderLocator.minLeaders)
+ .singleActiveConsumer()
+ .build();
+}
+
+@Bean
+public DirectExchange ex() {
+ return ExchangeBuilder.directExchange("ex.with.alternate")
+ .durable(true)
+ .alternate("alternate")
+ .build();
+}
+```
+
+##### Declaring Collections of Exchanges, Queues, and Bindings
+
+You can wrap collections of `Declarable` objects (`Queue`, `Exchange`, and `Binding`) in `Declarables` objects.
+The `RabbitAdmin` detects such beans (as well as discrete `Declarable` beans) in the application context, and declares the contained objects on the broker whenever a connection is established (initially and after a connection failure).
+The following example shows how to do so:
+
+```
+@Configuration
+public static class Config {
+
+ @Bean
+ public CachingConnectionFactory cf() {
+ return new CachingConnectionFactory("localhost");
+ }
+
+ @Bean
+ public RabbitAdmin admin(ConnectionFactory cf) {
+ return new RabbitAdmin(cf);
+ }
+
+ @Bean
+ public DirectExchange e1() {
+ return new DirectExchange("e1", false, true);
+ }
+
+ @Bean
+ public Queue q1() {
+ return new Queue("q1", false, false, true);
+ }
+
+ @Bean
+ public Binding b1() {
+ return BindingBuilder.bind(q1()).to(e1()).with("k1");
+ }
+
+ @Bean
+ public Declarables es() {
+ return new Declarables(
+ new DirectExchange("e2", false, true),
+ new DirectExchange("e3", false, true));
+ }
+
+ @Bean
+ public Declarables qs() {
+ return new Declarables(
+ new Queue("q2", false, false, true),
+ new Queue("q3", false, false, true));
+ }
+
+ @Bean
+ @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
+ public Declarables prototypes() {
+ return new Declarables(new Queue(this.prototypeQueueName, false, false, true));
+ }
+
+ @Bean
+ public Declarables bs() {
+ return new Declarables(
+ new Binding("q2", DestinationType.QUEUE, "e2", "k2", null),
+ new Binding("q3", DestinationType.QUEUE, "e3", "k3", null));
+ }
+
+ @Bean
+ public Declarables ds() {
+ return new Declarables(
+ new DirectExchange("e4", false, true),
+ new Queue("q4", false, false, true),
+ new Binding("q4", DestinationType.QUEUE, "e4", "k4", null));
+ }
+
+}
+```
+
+| |In versions prior to 2.1, you could declare multiple `Declarable` instances by defining beans of type `Collection<Declarable>`. This could cause undesirable side effects in some cases, because the admin had to iterate over all `Collection<?>` beans. This feature is now disabled in favor of `Declarables`, as discussed earlier in this section. You can revert to the previous behavior by setting the `RabbitAdmin` property called `declareCollections` to `true`.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Version 2.2 added the `getDeclarablesByType` method to `Declarables`; this can be used as a convenience, for example, when declaring the listener container bean(s).
+
+```
+public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
+ Declarables mixedDeclarables, MessageListener listener) {
+
+ SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
+ container.setQueues(mixedDeclarables.getDeclarablesByType(Queue.class).toArray(new Queue[0]));
+ container.setMessageListener(listener);
+ return container;
+}
+```
+
+##### Conditional Declaration
+
+By default, all queues, exchanges, and bindings are declared by all `RabbitAdmin` instances (assuming they have `auto-startup="true"`) in the application context.
+
+Starting with version 2.1.9, the `RabbitAdmin` has a new property `explicitDeclarationsOnly` (which is `false` by default); when this is set to `true`, the admin will only declare beans that are explicitly configured to be declared by that admin.
+
+| |Starting with the 1.2 release, you can conditionally declare these elements. This is particularly useful when an application connects to multiple brokers and needs to specify with which brokers a particular element should be declared.|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The classes representing these elements implement `Declarable`, which has two methods: `shouldDeclare()` and `getDeclaringAdmins()`.
+The `RabbitAdmin` uses these methods to determine whether a particular instance should actually process the declarations on its `Connection`.
+
+The properties are available as attributes in the namespace, as shown in the following examples:
+
+```
+<rabbit:admin id="admin1" connection-factory="CF1" />
+
+<rabbit:admin id="admin2" connection-factory="CF2" />
+
+<rabbit:admin id="admin3" connection-factory="CF3" auto-startup="false" />
+
+<rabbit:queue id="declaredByAdmin1AndAdmin2Implicitly" />
+
+<rabbit:queue id="declaredByAdmin1AndAdmin2" declared-by="admin1, admin2" />
+
+<rabbit:queue id="declaredByAdmin1Only" declared-by="admin1" />
+
+<rabbit:queue id="notDeclaredByAnyAdmin" auto-declare="false" />
+
+<rabbit:direct-exchange name="direct" declared-by="admin1, admin2">
+    <rabbit:bindings>
+        <rabbit:binding key="foo" queue="bar"/>
+    </rabbit:bindings>
+</rabbit:direct-exchange>
+```
+
+| |By default, the `auto-declare` attribute is `true` and, if the `declared-by` is not supplied (or is empty), then all `RabbitAdmin` instances declare the object (as long as the admin’s `auto-startup` attribute is `true`, the default, and the admin’s `explicit-declarations-only` attribute is false).|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Similarly, you can use Java-based `@Configuration` to achieve the same effect.
+In the following example, the components are declared by `admin1` but not by `admin2`:
+
+```
+@Bean
+public RabbitAdmin admin1() {
+ return new RabbitAdmin(cf1());
+}
+
+@Bean
+public RabbitAdmin admin2() {
+ return new RabbitAdmin(cf2());
+}
+
+@Bean
+public Queue queue() {
+ Queue queue = new Queue("foo");
+ queue.setAdminsThatShouldDeclare(admin1());
+ return queue;
+}
+
+@Bean
+public Exchange exchange() {
+ DirectExchange exchange = new DirectExchange("bar");
+ exchange.setAdminsThatShouldDeclare(admin1());
+ return exchange;
+}
+
+@Bean
+public Binding binding() {
+ Binding binding = new Binding("foo", DestinationType.QUEUE, exchange().getName(), "foo", null);
+ binding.setAdminsThatShouldDeclare(admin1());
+ return binding;
+}
+```
+
+##### A Note On the `id` and `name` Attributes
+
+The `name` attribute on `<rabbit:queue/>` and `<rabbit:exchange/>` elements reflects the name of the entity in the broker.
+For queues, if the `name` is omitted, an anonymous queue is created (see [`AnonymousQueue`](#anonymous-queue)).
+
+In versions prior to 2.0, the `name` was also registered as a bean name alias (similar to `name` on `<bean/>` elements).
+
+This caused two problems:
+
+* It prevented the declaration of a queue and exchange with the same name.
+
+* The alias was not resolved if it contained a SpEL expression (`#{…}`).
+
+Starting with version 2.0, if you declare one of these elements with both an `id` *and* a `name` attribute, the name is no longer declared as a bean name alias.
+If you wish to declare a queue and exchange with the same `name`, you must provide an `id`.
+
+There is no change if the element has only a `name` attribute.
+The bean can still be referenced by the `name` — for example, in binding declarations.
+However, you still cannot reference it if the name contains SpEL — you must provide an `id` for reference purposes.
+
+##### `AnonymousQueue`
+
+In general, when you need a uniquely named, exclusive, auto-delete queue, we recommend that you use the `AnonymousQueue` instead of broker-defined queue names (using `""` as a `Queue` name causes the broker to generate the queue name).
+
+This is because:
+
+1. The queues are actually declared when the connection to the broker is established.
+ This is long after the beans are created and wired together.
+ Beans that use the queue need to know its name.
+ In fact, the broker might not even be running when the application is started.
+
+2. If the connection to the broker is lost for some reason, the admin re-declares the `AnonymousQueue` with the same name.
+ If we used broker-declared queues, the queue name would change.
+
+You can control the format of the queue name used by `AnonymousQueue` instances.
+
+By default, the queue name is prefixed by `spring.gen-` followed by a base64 representation of the `UUID` — for example: `spring.gen-MRBv9sqISkuCiPfOYfpo4g`.
+
+You can provide an `AnonymousQueue.NamingStrategy` implementation in a constructor argument.
+The following example shows how to do so:
+
+```
+@Bean
+public Queue anon1() {
+ return new AnonymousQueue();
+}
+
+@Bean
+public Queue anon2() {
+ return new AnonymousQueue(new AnonymousQueue.Base64UrlNamingStrategy("something-"));
+}
+
+@Bean
+public Queue anon3() {
+ return new AnonymousQueue(AnonymousQueue.UUIDNamingStrategy.DEFAULT);
+}
+```
+
+The first bean generates a queue name prefixed by `spring.gen-` followed by a base64 representation of the `UUID` — for
+example: `spring.gen-MRBv9sqISkuCiPfOYfpo4g`.
+The second bean generates a queue name prefixed by `something-` followed by a base64 representation of the `UUID`.
+The third bean generates a name by using only the UUID (no base64 conversion) — for example, `f20c818a-006b-4416-bf91-643590fedb0e`.
+
+The base64 encoding uses the “URL and Filename Safe Alphabet” from RFC 4648.
+Trailing padding characters (`=`) are removed.
+
+You can provide your own naming strategy, whereby you can include other information (such as the application name or client host) in the queue name.
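+
+A minimal sketch of such a strategy (the class and prefix are hypothetical; `AnonymousQueue.NamingStrategy` declares a single `generateName()` method):
+
+```java
+public class HostPrefixNamingStrategy implements AnonymousQueue.NamingStrategy {
+
+    @Override
+    public String generateName() {
+        String host;
+        try {
+            host = InetAddress.getLocalHost().getHostName();
+        }
+        catch (UnknownHostException e) {
+            host = "unknown";
+        }
+        // produces names such as myhost.gen-f20c818a-006b-4416-bf91-643590fedb0e
+        return host + ".gen-" + UUID.randomUUID();
+    }
+
+}
+```
+
+It can then be supplied as a constructor argument: `new AnonymousQueue(new HostPrefixNamingStrategy())`.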
+
+You can specify the naming strategy when you use XML configuration.
+The `naming-strategy` attribute on the `<rabbit:queue>` element accepts a reference to a bean that implements `AnonymousQueue.NamingStrategy`.
+The following examples show how to specify the naming strategy in various ways:
+
+```
+<rabbit:queue id="springAnon" />
+
+<rabbit:queue id="uuidAnon" naming-strategy="uuidNamer" />
+
+<rabbit:queue id="customAnon" naming-strategy="customNamer" />
+
+<bean id="uuidNamer" class="org.springframework.amqp.core.AnonymousQueue.UUIDNamingStrategy" />
+
+<bean id="customNamer" class="org.springframework.amqp.core.AnonymousQueue.Base64UrlNamingStrategy">
+    <constructor-arg value="custom.gen-" />
+</bean>
+```
+
+The first example creates names such as `spring.gen-MRBv9sqISkuCiPfOYfpo4g`.
+The second example creates names with a String representation of a UUID.
+The third example creates names such as `custom.gen-MRBv9sqISkuCiPfOYfpo4g`.
+
+You can also provide your own naming strategy bean.
+
+Starting with version 2.1, anonymous queues are declared with argument `Queue.X_QUEUE_LEADER_LOCATOR` set to `client-local` by default.
+This ensures that the queue is declared on the node to which the application is connected.
+You can revert to the previous behavior by calling `queue.setLeaderLocator(null)` after constructing the instance.
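+
+For example (a sketch; the bean name is arbitrary):
+
+```java
+@Bean
+public AnonymousQueue anonWithDefaultPlacement() {
+    AnonymousQueue queue = new AnonymousQueue();
+    // revert to the broker's default queue placement (pre-2.1 behavior)
+    queue.setLeaderLocator(null);
+    return queue;
+}
+```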
+
+##### Recovering Auto-Delete Declarations
+
+Normally, the `RabbitAdmin`(s) only recover queues, exchanges, and bindings that are declared as beans in the application context; if any such declarations are auto-delete, they are removed by the broker when the connection is lost.
+When the connection is re-established, the admin redeclares the entities.
+Normally, entities created by calling `admin.declareQueue(…)`, `admin.declareExchange(…)`, and `admin.declareBinding(…)` are not recovered.
+
+Starting with version 2.4, the admin has a new property `redeclareManualDeclarations`; when true, the admin will recover these entities in addition to the beans in the application context.
+
+Recovery of individual declarations will not be performed if `deleteQueue(…)`, `deleteExchange(…)` or `removeBinding(…)` is called.
+Associated bindings are removed from the recoverable entities when queues and exchanges are deleted.
+
+Finally, calling `resetAllManualDeclarations()` will prevent the recovery of any previously declared entities.
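+
+For example, a minimal sketch of enabling this recovery:
+
+```java
+@Bean
+public RabbitAdmin admin(ConnectionFactory cf) {
+    RabbitAdmin admin = new RabbitAdmin(cf);
+    // also redeclare entities created via declareQueue()/declareExchange()/declareBinding()
+    admin.setRedeclareManualDeclarations(true);
+    return admin;
+}
+```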
+
+#### 4.1.12. Broker Event Listener
+
+When the [Event Exchange Plugin](https://www.rabbitmq.com/event-exchange.html) is enabled, if you add a bean of type `BrokerEventListener` to the application context, it publishes selected broker events as `BrokerEvent` instances, which can be consumed with a normal Spring `ApplicationListener` or `@EventListener` method.
+Events are published by the broker to a topic exchange `amq.rabbitmq.event` with a different routing key for each event type.
+The listener uses event keys, which are used to bind an `AnonymousQueue` to the exchange so the listener receives only selected events.
+Since it is a topic exchange, wildcards can be used (as well as explicitly requesting specific events), as the following example shows:
+
+```
+@Bean
+public BrokerEventListener eventListener() {
+ return new BrokerEventListener(connectionFactory(), "user.deleted", "channel.#", "queue.#");
+}
+```
+
+You can further narrow the received events in individual event listeners, by using normal Spring techniques, as the following example shows:
+
+```
+@EventListener(condition = "event.eventType == 'queue.created'")
+public void listener(BrokerEvent event) {
+ ...
+}
+```
+
+#### 4.1.13. Delayed Message Exchange
+
+Version 1.6 introduces support for the [Delayed Message Exchange Plugin](https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/).
+
+| |The plugin is currently marked as experimental but has been available for over a year (at the time of writing). If changes to the plugin make it necessary, we plan to add support for such changes as soon as practical. For that reason, this support in Spring AMQP should be considered experimental, too. This functionality was tested with RabbitMQ 3.6.0 and version 0.0.1 of the plugin.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+To use a `RabbitAdmin` to declare an exchange as delayed, you can set the `delayed` property on the exchange bean to `true`.
+The `RabbitAdmin` uses the exchange type (`Direct`, `Fanout`, and so on) to set the `x-delayed-type` argument and
+declare the exchange with type `x-delayed-message`.
+
+The `delayed` property (default: `false`) is also available when configuring exchange beans using XML.
+The following example shows how to use it:
+
+```
+<rabbit:topic-exchange name="topic" delayed="true" />
+```
+
+To send a delayed message, you can set the `x-delay` header through `MessageProperties`, as the following examples show:
+
+```
+MessageProperties properties = new MessageProperties();
+properties.setDelay(15000);
+template.send(exchange, routingKey,
+ MessageBuilder.withBody("foo".getBytes()).andProperties(properties).build());
+```
+
+```
+rabbitTemplate.convertAndSend(exchange, routingKey, "foo", new MessagePostProcessor() {
+
+ @Override
+ public Message postProcessMessage(Message message) throws AmqpException {
+ message.getMessageProperties().setDelay(15000);
+ return message;
+ }
+
+});
+```
+
+To check if a message was delayed, use the `getReceivedDelay()` method on the `MessageProperties`.
+It is a separate property to avoid unintended propagation to an output message generated from an input message.
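+
+For example, a sketch of inspecting the received delay in a listener (the queue name is a placeholder):
+
+```java
+@RabbitListener(queues = "delayed.queue")
+public void listen(Message message) {
+    Integer delay = message.getMessageProperties().getReceivedDelay();
+    if (delay != null) {
+        // the message was routed through a delayed exchange with this delay (in ms)
+    }
+}
+```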
+
+#### 4.1.14. RabbitMQ REST API
+
+When the management plugin is enabled, the RabbitMQ server exposes a REST API to monitor and configure the broker.
+A [Java Binding for the API](https://github.com/rabbitmq/hop) is now provided.
+The `com.rabbitmq.http.client.Client` is a standard, immediate, and, therefore, blocking API.
+It is based on the [Spring Web](https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html#spring-web) module and its `RestTemplate` implementation.
+On the other hand, the `com.rabbitmq.http.client.ReactorNettyClient` is a reactive, non-blocking implementation based on the [Reactor Netty](https://projectreactor.io/docs/netty/release/reference/docs/index.html) project.
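+
+For example, a minimal sketch using the blocking client (the URL and credentials are placeholders; the constructor can throw checked exceptions):
+
+```java
+public List<String> queueNames() throws Exception {
+    // connects to the management plugin's REST API
+    Client client = new Client("http://localhost:15672/api/", "guest", "guest");
+    return client.getQueues().stream()
+            .map(QueueInfo::getName)
+            .collect(Collectors.toList());
+}
+```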
+
+The hop dependency (`com.rabbitmq:http-client`) is now also `optional`.
+
+See their Javadoc for more information.
+
+#### 4.1.15. Exception Handling
+
+Many operations with the RabbitMQ Java client can throw checked exceptions.
+For example, there are a lot of cases where `IOException` instances may be thrown.
+The `RabbitTemplate`, `SimpleMessageListenerContainer`, and other Spring AMQP components catch those exceptions and convert them into one of the exceptions within `AmqpException` hierarchy.
+Those are defined in the 'org.springframework.amqp' package, and `AmqpException` is the base of the hierarchy.
+
+When a listener throws an exception, it is wrapped in a `ListenerExecutionFailedException`.
+Normally the message is rejected and requeued by the broker.
+Setting `defaultRequeueRejected` to `false` causes messages to be discarded (or routed to a dead letter exchange).
+As discussed in [Message Listeners and the Asynchronous Case](#async-listeners), the listener can throw an `AmqpRejectAndDontRequeueException` (or `ImmediateRequeueAmqpException`) to conditionally control this behavior.
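+
+For example, a sketch of a listener that rejects invalid payloads without requeuing (the queue name and validation logic are hypothetical):
+
+```java
+@RabbitListener(queues = "some.queue")
+public void handle(String payload) {
+    if (payload.isEmpty()) {
+        // rejected without requeue; the broker discards or dead-letters the message
+        throw new AmqpRejectAndDontRequeueException("empty payload");
+    }
+    // normal processing...
+}
+```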
+
+However, there is a class of errors where the listener cannot control the behavior.
+When a message that cannot be converted is encountered (for example, an invalid `content_encoding` header), some exceptions are thrown before the message reaches user code.
+With `defaultRequeueRejected` set to `true` (the default), or when an `ImmediateRequeueAmqpException` is thrown, such messages would be redelivered over and over.
+Before version 1.3.2, users needed to write a custom `ErrorHandler`, as discussed in [Exception Handling](#exception-handling), to avoid this situation.
+
+Starting with version 1.3.2, the default `ErrorHandler` is now a `ConditionalRejectingErrorHandler` that rejects (and does not requeue) messages that fail with an irrecoverable error.
+Specifically, it rejects messages that fail with the following errors:
+
+* `o.s.amqp…MessageConversionException`: Can be thrown when converting the incoming message payload using a `MessageConverter`.
+
+* `o.s.messaging…MessageConversionException`: Can be thrown by the conversion service if additional conversion is required when mapping to a `@RabbitListener` method.
+
+* `o.s.messaging…MethodArgumentNotValidException`: Can be thrown if validation (for example, `@Valid`) is used in the listener and the validation fails.
+
+* `o.s.messaging…MethodArgumentTypeMismatchException`: Can be thrown if the inbound message was converted to a type that is not correct for the target method.
+  For example, the parameter is declared as `Message<Foo>` but `Message<Bar>` is received.
+
+* `java.lang.NoSuchMethodException`: Added in version 1.6.3.
+
+* `java.lang.ClassCastException`: Added in version 1.6.3.
+
+You can configure an instance of this error handler with a `FatalExceptionStrategy` so that users can provide their own rules for conditional message rejection — for example, a delegate implementation to the `BinaryExceptionClassifier` from Spring Retry ([Message Listeners and the Asynchronous Case](#async-listeners)).
+In addition, the `ListenerExecutionFailedException` now has a `failedMessage` property that you can use in the decision.
+If the `FatalExceptionStrategy.isFatal()` method returns `true`, the error handler throws an `AmqpRejectAndDontRequeueException`.
+The default `FatalExceptionStrategy` logs a warning message when an exception is determined to be fatal.
+
+Since version 1.6.3, a convenient way to add user exceptions to the fatal list is to subclass `ConditionalRejectingErrorHandler.DefaultExceptionStrategy` and override the `isUserCauseFatal(Throwable cause)` method to return `true` for fatal exceptions.
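+
+For example (a sketch; `MyFatalException` is a hypothetical application exception, and the method's visibility should be checked against the version in use):
+
+```java
+public class MyFatalExceptionStrategy extends ConditionalRejectingErrorHandler.DefaultExceptionStrategy {
+
+    @Override
+    protected boolean isUserCauseFatal(Throwable cause) {
+        // treat this application exception as fatal: reject, do not requeue
+        return cause instanceof MyFatalException;
+    }
+
+}
+```
+
+The strategy can then be passed to the handler's constructor: `new ConditionalRejectingErrorHandler(new MyFatalExceptionStrategy())`.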
+
+A common pattern for handling DLQ messages is to set a `time-to-live` on those messages as well as additional DLQ configuration such that these messages expire and are routed back to the main queue for retry.
+The problem with this technique is that messages that cause fatal exceptions loop forever.
+Starting with version 2.1, the `ConditionalRejectingErrorHandler` detects an `x-death` header on a message that causes a fatal exception to be thrown.
+The message is logged and discarded.
+You can revert to the previous behavior by setting the `discardFatalsWithXDeath` property on the `ConditionalRejectingErrorHandler` to `false`.
+
+| |Starting with version 2.1.9, messages with these fatal exceptions are rejected and NOT requeued by default, even if the container acknowledge mode is MANUAL. These exceptions generally occur before the listener is invoked, so the listener does not have a chance to ack or nack the message; previously, it therefore remained in the queue in an un-acked state. To revert to the previous behavior, set the `rejectManual` property on the `ConditionalRejectingErrorHandler` to `false`.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+#### 4.1.16. Transactions
+
+The Spring Rabbit framework has support for automatic transaction management in the synchronous and asynchronous use cases with a number of different semantics that can be selected declaratively, as is familiar to existing users of Spring transactions.
+This makes many if not most common messaging patterns easy to implement.
+
+There are two ways to signal the desired transaction semantics to the framework.
+In both the `RabbitTemplate` and `SimpleMessageListenerContainer`, there is a flag `channelTransacted` which, if `true`, tells the framework to use a transactional channel and to end all operations (send or receive) with a commit or rollback (depending on the outcome), with an exception signaling a rollback.
+Another signal is to provide an external transaction with one of Spring’s `PlatformTransactionManager` implementations as a context for the ongoing operation.
+If there is already a transaction in progress when the framework is sending or receiving a message, and the `channelTransacted` flag is `true`, the commit or rollback of the messaging transaction is deferred until the end of the current transaction.
+If the `channelTransacted` flag is `false`, no transaction semantics apply to the messaging operation (it is auto-acked).
+
+The `channelTransacted` flag is a configuration time setting.
+It is declared and processed once when the AMQP components are created, usually at application startup.
+The external transaction is more dynamic in principle because the system responds to the current thread state at runtime.
+However, in practice, it is often also a configuration setting, when the transactions are layered onto an application declaratively.
+
+For synchronous use cases with `RabbitTemplate`, the external transaction is provided by the caller, either declaratively or imperatively according to taste (the usual Spring transaction model).
+The following example shows a declarative approach (usually preferred because it is non-invasive), where the template has been configured with `channelTransacted=true`:
+
+```
+@Transactional
+public void doSomething() {
+ String incoming = rabbitTemplate.receiveAndConvert();
+ // do some more database processing...
+ String outgoing = processInDatabaseAndExtractReply(incoming);
+ rabbitTemplate.convertAndSend(outgoing);
+}
+```
+
+In the preceding example, a `String` payload is received, converted, and sent as a message body inside a method marked as `@Transactional`.
+If the database processing fails with an exception, the incoming message is returned to the broker, and the outgoing message is not sent.
+This applies to any operations with the `RabbitTemplate` inside a chain of transactional methods (unless, for instance, the `Channel` is directly manipulated to commit the transaction early).
+
+For asynchronous use cases with `SimpleMessageListenerContainer`, if an external transaction is needed, it has to be requested by the container when it sets up the listener.
+To signal that an external transaction is required, the user provides an implementation of `PlatformTransactionManager` to the container when it is configured.
+The following example shows how to do so:
+
+```
+@Configuration
+public class ExampleExternalTransactionAmqpConfiguration {
+
+ @Bean
+ public SimpleMessageListenerContainer messageListenerContainer() {
+ SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
+ container.setConnectionFactory(rabbitConnectionFactory());
+ container.setTransactionManager(transactionManager());
+ container.setChannelTransacted(true);
+ container.setQueueName("some.queue");
+ container.setMessageListener(exampleListener());
+ return container;
+ }
+
+}
+```
+
+In the preceding example, the transaction manager is added as a dependency injected from another bean definition (not shown), and the `channelTransacted` flag is also set to `true`.
+The effect is that if the listener fails with an exception, the transaction is rolled back, and the message is also returned to the broker.
+Significantly, if the transaction fails to commit (for example, because of
+a database constraint error or connectivity problem), the AMQP transaction is also rolled back, and the message is returned to the broker.
+This is sometimes known as a “Best Efforts 1 Phase Commit”, and is a very powerful pattern for reliable messaging.
+If the `channelTransacted` flag was set to `false` (the default) in the preceding example, the external transaction would still be provided for the listener, but all messaging operations would be auto-acked, so the effect is to commit the messaging operations even on a rollback of the business operation.
+
+##### Conditional Rollback
+
+Prior to version 1.6.6, adding a rollback rule to a container’s `transactionAttribute` when using an external transaction manager (such as JDBC) had no effect.
+Exceptions always rolled back the transaction.
+
+Also, when using a [transaction advice](https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/html/transaction.html#transaction-declarative) in the container’s advice chain, conditional rollback was not very useful, because all listener exceptions are wrapped in a `ListenerExecutionFailedException`.
+
+The first problem has been corrected, and the rules are now applied properly.
+Further, the `ListenerFailedRuleBasedTransactionAttribute` is now provided.
+It is a subclass of `RuleBasedTransactionAttribute`, with the only difference being that it is aware of the `ListenerExecutionFailedException` and uses the cause of such exceptions for the rule.
+This transaction attribute can be used directly in the container or through a transaction advice.
+
+The following example uses this rule:
+
+```
+@Bean
+public AbstractMessageListenerContainer container() {
+ ...
+ container.setTransactionManager(transactionManager);
+ RuleBasedTransactionAttribute transactionAttribute =
+ new ListenerFailedRuleBasedTransactionAttribute();
+ transactionAttribute.setRollbackRules(Collections.singletonList(
+ new NoRollbackRuleAttribute(DontRollBackException.class)));
+ container.setTransactionAttribute(transactionAttribute);
+ ...
+}
+```
+
+##### A note on Rollback of Received Messages
+
+AMQP transactions apply only to messages and acks sent to the broker.
+Consequently, when there is a rollback of a Spring transaction and a message has been received, Spring AMQP has to not only roll back the transaction but also manually reject the message (sort of a nack, but that is not what the specification calls it).
+The action taken on message rejection is independent of transactions and depends on the `defaultRequeueRejected` property (default: `true`).
+For more information about rejecting failed messages, see [Message Listeners and the Asynchronous Case](#async-listeners).
+
+For more information about RabbitMQ transactions and their limitations, see [RabbitMQ Broker Semantics](https://www.rabbitmq.com/semantics.html).
+
+| |Prior to RabbitMQ 2.7.0, such messages (and any that are unacked when a channel is closed or aborts) went to the back of the queue on a Rabbit broker. Since 2.7.0, rejected messages go to the front of the queue, in a similar manner to JMS rolled back messages.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| |Previously, message requeue on transaction rollback was inconsistent between local transactions and when a `TransactionManager` was provided. In the former case, the normal requeue logic (`AmqpRejectAndDontRequeueException` or `defaultRequeueRejected=false`) applied (see [Message Listeners and the Asynchronous Case](#async-listeners)). With a transaction manager, the message was unconditionally requeued on rollback. Starting with version 2.0, the behavior is consistent and the normal requeue logic is applied in both cases. To revert to the previous behavior, you can set the container’s `alwaysRequeueWithTxManagerRollback` property to `true`. See [Message Listener Container Configuration](#containerAttributes).|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+##### Using `RabbitTransactionManager`
+
+The [RabbitTransactionManager](https://docs.spring.io/spring-amqp/docs/latest_ga/api/org/springframework/amqp/rabbit/transaction/RabbitTransactionManager.html) is an alternative to executing Rabbit operations within, and synchronized with, external transactions.
+This transaction manager is an implementation of the [`PlatformTransactionManager`](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/transaction/PlatformTransactionManager.html) interface and should be used with a single Rabbit `ConnectionFactory`.
+
+| |This strategy is not able to provide XA transactions — for example, in order to share transactions between messaging and database access.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------|
+
+Application code is required to retrieve the transactional Rabbit resources through `ConnectionFactoryUtils.getTransactionalResourceHolder(ConnectionFactory, boolean)` instead of a standard `Connection.createChannel()` call with subsequent channel creation.
+When using Spring AMQP’s [RabbitTemplate](https://docs.spring.io/spring-amqp/docs/latest_ga/api/org/springframework/amqp/rabbit/core/RabbitTemplate.html), it will autodetect a thread-bound Channel and automatically participate in its transaction.
+
+With Java configuration, you can set up a new `RabbitTransactionManager` by using the following bean:
+
+```
+@Bean
+public RabbitTransactionManager rabbitTransactionManager() {
+ return new RabbitTransactionManager(connectionFactory);
+}
+```
+
+If you prefer XML configuration, you can declare the following bean in your XML Application Context file:
+
+```
+<bean id="rabbitTxManager"
+      class="org.springframework.amqp.rabbit.transaction.RabbitTransactionManager">
+    <property name="connectionFactory" ref="connectionFactory"/>
+</bean>
+```
+
+##### Transaction Synchronization
+
+Synchronizing a RabbitMQ transaction with some other (e.g. DBMS) transaction provides "Best Effort One Phase Commit" semantics.
+It is possible that the RabbitMQ transaction fails to commit during the after completion phase of transaction synchronization.
+This is logged by the `spring-tx` infrastructure as an error, but no exception is thrown to the calling code.
+Starting with version 2.3.10, you can call `ConnectionFactoryUtils.checkAfterCompletion()` after the transaction has committed on the same thread that processed the transaction.
+It will simply return if no exception occurred; otherwise it will throw an `AfterCompletionFailedException` which will have a property representing the synchronization status of the completion.
+
+Enable this feature by calling `ConnectionFactoryUtils.enableAfterCompletionFailureCapture(true)`; this is a global flag and applies to all threads.
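+In code, this might look like the following (a minimal sketch; where the calls are placed is up to the application):
+
+```
+// at application startup - a global flag that applies to all threads
+ConnectionFactoryUtils.enableAfterCompletionFailureCapture(true);
+
+// later, on the same thread that processed the transaction, after it commits
+try {
+    ConnectionFactoryUtils.checkAfterCompletion();
+}
+catch (AfterCompletionFailedException ex) {
+    // the exception carries a property with the synchronization status of the completion
+}
+```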
+
+#### 4.1.17. Message Listener Container Configuration
+
+There are quite a few options for configuring a `SimpleMessageListenerContainer` (SMLC) and a `DirectMessageListenerContainer` (DMLC) related to transactions and quality of service, and some of them interact with each other.
+Properties that apply to the SMLC, DMLC, or `StreamListenerContainer` (StLC) (see [Using the RabbitMQ Stream Plugin](#stream-support)) are indicated by a check mark in the appropriate column.
+See [Choosing a Container](#choose-container) for information to help you decide which container is appropriate for your application.
+
+The following table shows the container property names and their equivalent attribute names (in parentheses) when using the namespace to configure a `<rabbit:listener-container/>`.
+The `type` attribute on that element can be `simple` (default) or `direct` to specify an `SMLC` or `DMLC` respectively.
+Some properties are not exposed by the namespace.
+These are indicated by `N/A` for the attribute.
+
+| Property (Attribute) | Description | SMLC | DMLC | StLC |
+|----------------------|-------------|------|------|------|
+| (group) | This is available only when using the namespace. When specified, a bean of type `Collection` is registered with this name, and the container for each `<listener-container/>` element is added to the collection. This allows, for example, starting and stopping the group of containers by iterating over the collection. If multiple `<listener-container/>` elements have the same group value, the containers in the collection form an aggregate of all containers so designated. |![tickmark](https://docs.spring.io/spring-amqp/docs/current/reference/html/images/tickmark.png)|![tickmark](https://docs.spring.io/spring-amqp/docs/current/reference/html/images/tickmark.png)| |
+
+| |Prior to version 1.6, if there was more than one admin in the context, the container would randomly select one. If there were no admins, it would create one internally. In either case, this could cause unexpected results. Starting with version 1.6, for `autoDeclare` to work, there must be exactly one `RabbitAdmin` in the context, or a reference to a specific instance must be configured on the container using the `rabbitAdmin` property.|
+|---|---|
+
+| |If the broker is not available during initial startup, the container starts and the conditions are checked when the connection is established.|
+|---|---|
+
+| |The check is done against all queues in the context, not just the queues that a particular listener is configured to use. If you wish to limit the checks to just those queues used by a container, you should configure a separate `RabbitAdmin` for the container, and provide a reference to it using the `rabbitAdmin` property. See [Conditional Declaration](#conditional-declaration) for more information.|
+|---|---|
+
+| |Mismatched queue argument detection is disabled while starting a container for a `@RabbitListener` in a bean that is marked `@Lazy`. This is to avoid a potential deadlock which can delay the start of such containers for up to 60 seconds. Applications using lazy listener beans should check the queue arguments before getting a reference to the lazy bean.|
+|---|---|
+
+| |Missing queue detection is disabled while starting a container for a `@RabbitListener` in a bean that is marked `@Lazy`. This is to avoid a potential deadlock which can delay the start of such containers for up to 60 seconds. Applications using lazy listener beans should check the queue(s) before getting a reference to the lazy bean.|
+|---|---|
+
+| |There are scenarios where the prefetch value should be low — for example, with large messages, especially if the processing is slow (messages could add up to a large amount of memory in the client process), and if strict message ordering is necessary (the prefetch value should be set back to 1 in this case). Also, with low-volume messaging and multiple consumers (including concurrency within a single listener container instance), you may wish to reduce the prefetch to get a more even distribution of messages across consumers.|
+|---|---|
+
+#### 4.1.18. Listener Concurrency
+
+##### SimpleMessageListenerContainer
+
+By default, the listener container starts a single consumer that receives messages from the queues.
+
+When examining the table in the previous section, you can see a number of properties and attributes that control concurrency.
+The simplest is `concurrentConsumers`, which creates that (fixed) number of consumers that concurrently process messages.
+
+Prior to version 1.3.0, this was the only setting available and the container had to be stopped and started again to change the setting.
+
+Starting with version 1.3.0, you can adjust the `concurrentConsumers` property dynamically.
+If it is changed while the container is running, consumers are added or removed as necessary to adjust to the new setting.
+
+In addition, a new property called `maxConcurrentConsumers` has been added and the container dynamically adjusts the concurrency based on workload.
+This works in conjunction with four additional properties: `consecutiveActiveTrigger`, `startConsumerMinInterval`, `consecutiveIdleTrigger`, and `stopConsumerMinInterval`.
+With the default settings, the algorithm to increase consumers works as follows:
+
+If the `maxConcurrentConsumers` has not been reached and an existing consumer is active for ten consecutive cycles AND at least 10 seconds has elapsed since the last consumer was started, a new consumer is started.
+A consumer is considered active if it received at least one message in `batchSize` \* `receiveTimeout` milliseconds.
+
+With the default settings, the algorithm to decrease consumers works as follows:
+
+If there are more than `concurrentConsumers` running and a consumer detects ten consecutive timeouts (idle) AND the last consumer was stopped at least 60 seconds ago, a consumer is stopped.
+The timeout depends on the `receiveTimeout` and the `batchSize` properties.
+A consumer is considered idle if it receives no messages in `batchSize` \* `receiveTimeout` milliseconds.
+So, with the default timeout (one second) and a `batchSize` of four, stopping a consumer is considered after 40 seconds of idle time (four timeouts correspond to one idle detection).
+
+| |Practically, consumers can be stopped only if the whole container is idle for some time. This is because the broker shares its work across all the active consumers.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Each consumer uses a single channel, regardless of the number of configured queues.
+
+Starting with version 2.0, the `concurrentConsumers` and `maxConcurrentConsumers` properties can be set with the `concurrency` property — for example, `2-4`.
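+For example, the following sketch (the bean and queue names are assumptions) starts two consumers and lets the container scale up to four under load:
+
+```
+@Bean
+public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
+    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
+    container.setQueueNames("some.queue"); // assumed queue name
+    container.setConcurrency("2-4"); // concurrentConsumers = 2, maxConcurrentConsumers = 4
+    return container;
+}
+```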
+
+##### Using `DirectMessageListenerContainer`
+
+With this container, concurrency is based on the configured queues and `consumersPerQueue`.
+Each consumer for each queue uses a separate channel, and the concurrency is controlled by the rabbit client library.
+By default, at the time of writing, it uses a pool of `DEFAULT_NUM_THREADS = Runtime.getRuntime().availableProcessors() * 2` threads.
+
+You can configure a `taskExecutor` to provide the required maximum concurrency.
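+A sketch (the queue names and executor sizing are assumptions):
+
+```
+@Bean
+public DirectMessageListenerContainer container(ConnectionFactory connectionFactory) {
+    DirectMessageListenerContainer container = new DirectMessageListenerContainer(connectionFactory);
+    container.setQueueNames("q1", "q2"); // assumed queue names
+    container.setConsumersPerQueue(3);   // three consumers (each on its own channel) per queue
+    container.setTaskExecutor(new SimpleAsyncTaskExecutor("dmlc-")); // sized for the required concurrency
+    return container;
+}
+```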
+
+#### 4.1.19. Exclusive Consumer
+
+Starting with version 1.3, you can configure the listener container with a single exclusive consumer.
+This prevents other containers from consuming from the queues until the current consumer is cancelled.
+The concurrency of such a container must be `1`.
+
+When using exclusive consumers, other containers try to consume from the queues according to the `recoveryInterval` property and log a `WARN` message if the attempt fails.
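+Configuring an exclusive consumer might look like the following sketch (the queue name is an assumption):
+
+```
+@Bean
+public SimpleMessageListenerContainer exclusiveContainer(ConnectionFactory connectionFactory) {
+    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
+    container.setQueueNames("exclusive.queue"); // assumed queue name
+    container.setExclusive(true);  // no other container can consume from this queue
+    container.setConcurrency("1"); // exclusive consumers require a concurrency of 1
+    return container;
+}
+```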
+
+#### 4.1.20. Listener Container Queues
+
+Version 1.3 introduced a number of improvements for handling multiple queues in a listener container.
+
+The container must be configured to listen on at least one queue.
+This was the case previously, too, but now queues can be added and removed at runtime.
+The container recycles (cancels and re-creates) the consumers when any pre-fetched messages have been processed.
+See the [Javadoc](https://docs.spring.io/spring-amqp/docs/latest-ga/api/org/springframework/amqp/rabbit/listener/AbstractMessageListenerContainer.html) for the `addQueues`, `addQueueNames`, `removeQueues` and `removeQueueNames` methods.
+When removing queues, at least one queue must remain.
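+Adding and removing queues on a running container might look like this (a sketch; the queue names are assumptions):
+
+```
+SimpleMessageListenerContainer container = ...; // an already-running container
+container.addQueueNames("another.queue"); // consumers are recycled to pick up the new queue
+container.removeQueueNames("old.queue");  // at least one queue must remain
+```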
+
+A consumer now starts if any of its queues are available.
+Previously, the container would stop if any queues were unavailable.
+Now, this is only the case if none of the queues are available.
+If not all queues are available, the container tries to passively declare (and consume from) the missing queues every 60 seconds.
+
+Also, if a consumer receives a cancel from the broker (for example, if a queue is deleted) the consumer tries to recover, and the recovered consumer continues to process messages from any other configured queues.
+Previously, a cancel on one queue cancelled the entire consumer and, eventually, the container would stop due to the missing queue.
+
+If you wish to permanently remove a queue, you should update the container before or after deleting the queue, to avoid future attempts to consume from it.
+
+#### 4.1.21. Resilience: Recovering from Errors and Broker Failures
+
+Some of the key (and most popular) high-level features that Spring AMQP provides are to do with recovery and automatic re-connection in the event of a protocol error or broker failure.
+We have seen all the relevant components already in this guide, but it should help to bring them all together here and call out the features and recovery scenarios individually.
+
+The primary reconnection features are enabled by the `CachingConnectionFactory` itself.
+It is also often beneficial to use the `RabbitAdmin` auto-declaration features.
+In addition, if you care about guaranteed delivery, you probably also need to use the `channelTransacted` flag in `RabbitTemplate` and `SimpleMessageListenerContainer` and the `AcknowledgeMode.AUTO` (or manual if you do the acks yourself) in the `SimpleMessageListenerContainer`.
+
+##### Automatic Declaration of Exchanges, Queues, and Bindings
+
+The `RabbitAdmin` component can declare exchanges, queues, and bindings on startup.
+It does this lazily, through a `ConnectionListener`.
+Consequently, if the broker is not present on startup, it does not matter.
+The first time a `Connection` is used (for example, by sending a message), the listener fires and the admin features are applied.
+A further benefit of doing the auto declarations in a listener is that, if the connection is dropped for any reason (for example, broker death or a network glitch), they are applied again when the connection is re-established.
+
+| |Queues declared this way must have fixed names — either explicitly declared or generated by the framework for `AnonymousQueue` instances. Anonymous queues are non-durable, exclusive, and auto-deleting.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| |Automatic declaration is performed only when the `CachingConnectionFactory` cache mode is `CHANNEL` (the default). This limitation exists because exclusive and auto-delete queues are bound to the connection.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Starting with version 2.2.2, the `RabbitAdmin` will detect beans of type `DeclarableCustomizer` and apply the function before actually processing the declaration.
+This is useful, for example, to set a new argument (property) before it has first class support within the framework.
+
+```
+@Bean
+public DeclarableCustomizer customizer() {
+ return dec -> {
+ if (dec instanceof Queue && ((Queue) dec).getName().equals("my.queue")) {
+ dec.addArgument("some.new.queue.argument", true);
+ }
+ return dec;
+ };
+}
+```
+
+It is also useful in projects that don’t provide direct access to the `Declarable` bean definitions.
+
+See also [RabbitMQ Automatic Connection/Topology recovery](#auto-recovery).
+
+##### Failures in Synchronous Operations and Options for Retry
+
+If you lose your connection to the broker in a synchronous sequence when using `RabbitTemplate` (for instance), Spring AMQP throws an `AmqpException` (usually, but not always, `AmqpIOException`).
+We do not try to hide the fact that there was a problem, so you have to be able to catch and respond to the exception.
+The easiest thing to do if you suspect that the connection was lost (and it was not your fault) is to try the operation again.
+You can do this manually, or you could look at using Spring Retry to handle the retry (imperatively or declaratively).
+
+Spring Retry provides a couple of AOP interceptors and a great deal of flexibility to specify the parameters of the retry (number of attempts, exception types, backoff algorithm, and others).
+Spring AMQP also provides some convenience factory beans for creating Spring Retry interceptors in a convenient form for AMQP use cases, with strongly typed callback interfaces that you can use to implement custom recovery logic.
+See the Javadoc and properties of `StatefulRetryOperationsInterceptor` and `StatelessRetryOperationsInterceptor` for more detail.
+Stateless retry is appropriate if there is no transaction or if a transaction is started inside the retry callback.
+Note that stateless retry is simpler to configure and analyze than stateful retry, but it is not usually appropriate if there is an ongoing transaction that must be rolled back or definitely is going to roll back.
+A dropped connection in the middle of a transaction should have the same effect as a rollback.
+Consequently, for reconnections where the transaction is started higher up the stack, stateful retry is usually the best choice.
+Stateful retry needs a mechanism to uniquely identify a message.
+The simplest approach is to have the sender put a unique value in the `MessageId` message property.
+The provided message converters provide an option to do this: you can set `createMessageIds` to `true`.
+Otherwise, you can inject a `MessageKeyGenerator` implementation into the interceptor.
+The key generator must return a unique key for each message.
+In versions prior to version 2.0, a `MissingMessageIdAdvice` was provided.
+It enabled messages without a `messageId` property to be retried exactly once (ignoring the retry settings).
+This advice is no longer provided, since, along with `spring-retry` version 1.2, its functionality is built into the interceptor and message listener containers.
+
+| |For backwards compatibility, a message with a null message ID is considered fatal for the consumer (consumer is stopped) by default (after one retry). To replicate the functionality provided by the `MissingMessageIdAdvice`, you can set the `statefulRetryFatalWithNullMessageId` property to `false` on the listener container. With that setting, the consumer continues to run and the message is rejected (after one retry). It is discarded or routed to the dead letter queue (if one is configured).|
+|---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
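+Enabling message IDs on the sending side might look like the following sketch (using the JSON converter here is just an example; the option is available on the provided converters generally):
+
+```
+@Bean
+public Jackson2JsonMessageConverter converter() {
+    Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
+    converter.setCreateMessageIds(true); // assign a unique messageId to each outbound message
+    return converter;
+}
+```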
+
+Starting with version 1.3, a builder API is provided to aid in assembling these interceptors by using Java (in `@Configuration` classes).
+The following example shows how to do so:
+
+```
+@Bean
+public StatefulRetryOperationsInterceptor interceptor() {
+ return RetryInterceptorBuilder.stateful()
+ .maxAttempts(5)
+ .backOffOptions(1000, 2.0, 10000) // initialInterval, multiplier, maxInterval
+ .build();
+}
+```
+
+Only a subset of retry capabilities can be configured this way.
+More advanced features would need the configuration of a `RetryTemplate` as a Spring bean.
+See the [Spring Retry Javadoc](https://docs.spring.io/spring-retry/docs/api/current/) for complete information about available policies and their configuration.
+
+##### Retry with Batch Listeners
+
+It is not recommended to configure retry with a batch listener, unless the batch was created by the producer, in a single record.
+See [Batched Messages](#de-batching) for information about consumer and producer-created batches.
+With a consumer-created batch, the framework has no knowledge about which message in the batch caused the failure so recovery after the retries are exhausted is not possible.
+With producer-created batches, since there is only one message that actually failed, the whole message can be recovered.
+Applications may want to inform a custom recoverer where in the batch the failure occurred, perhaps by setting an index property of the thrown exception.
+
+A retry recoverer for a batch listener must implement `MessageBatchRecoverer`.
+
+##### Message Listeners and the Asynchronous Case
+
+If a `MessageListener` fails because of a business exception, the exception is handled by the message listener container, which then goes back to listening for another message.
+If the failure is caused by a dropped connection (not a business exception), the consumer that is collecting messages for the listener has to be cancelled and restarted.
+The `SimpleMessageListenerContainer` handles this seamlessly, and it leaves a log to say that the listener is being restarted.
+In fact, it loops endlessly, trying to restart the consumer.
+Only if the consumer is very badly behaved indeed will it give up.
+One side effect is that if the broker is down when the container starts, it keeps trying until a connection can be established.
+
+Business exception handling, as opposed to protocol errors and dropped connections, might need more thought and some custom configuration, especially if transactions or container acks are in use.
+Prior to 2.8.x, RabbitMQ had no definition of dead letter behavior.
+Consequently, by default, a message that is rejected or rolled back because of a business exception can be redelivered endlessly.
+To put a limit on the client on the number of re-deliveries, one choice is a `StatefulRetryOperationsInterceptor` in the advice chain of the listener.
+The interceptor can have a recovery callback that implements a custom dead letter action — whatever is appropriate for your particular environment.
+
+Another alternative is to set the container’s `defaultRequeueRejected` property to `false`.
+This causes all failed messages to be discarded.
+When using RabbitMQ 2.8.x or higher, this also facilitates delivering the message to a dead letter exchange.
+
+Alternatively, you can throw an `AmqpRejectAndDontRequeueException`.
+Doing so prevents message requeuing, regardless of the setting of the `defaultRequeueRejected` property.
+
+Starting with version 2.1, an `ImmediateRequeueAmqpException` is introduced to perform exactly the opposite logic: the message will be requeued, regardless of the setting of the `defaultRequeueRejected` property.
+
+Often, a combination of both techniques is used.
+You can use a `StatefulRetryOperationsInterceptor` in the advice chain with a `MessageRecoverer` that throws an `AmqpRejectAndDontRequeueException`.
+The `MessageRecoverer` is called when all retries have been exhausted.
+The `RejectAndDontRequeueRecoverer` does exactly that.
+The default `MessageRecoverer` consumes the errant message and emits a `WARN` message.
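+Combining the two techniques might look like the following sketch: after the retries are exhausted, the message is rejected without requeuing (and is dead-lettered, if a dead letter exchange is configured):
+
+```
+@Bean
+public StatefulRetryOperationsInterceptor retryInterceptor() {
+    return RetryInterceptorBuilder.stateful()
+            .maxAttempts(3)
+            .recoverer(new RejectAndDontRequeueRecoverer())
+            .build();
+}
+```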
+
+Starting with version 1.3, a new `RepublishMessageRecoverer` is provided, to allow publishing of failed messages after retries are exhausted.
+
+When a recoverer consumes the final exception, the message is ack’d and is not sent to the dead letter exchange, if any.
+
+| |When `RepublishMessageRecoverer` is used on the consumer side, the received message has `deliveryMode` in the `receivedDeliveryMode` message property. In this case the `deliveryMode` is `null`. That means a `NON_PERSISTENT` delivery mode on the broker. Starting with version 2.0, you can configure the `RepublishMessageRecoverer` for the `deliveryMode` to set into the message to republish if it is `null`. By default, it uses `MessageProperties` default value - `MessageDeliveryMode.PERSISTENT`.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The following example shows how to set a `RepublishMessageRecoverer` as the recoverer:
+
+```
+@Bean
+RetryOperationsInterceptor interceptor() {
+ return RetryInterceptorBuilder.stateless()
+ .maxAttempts(5)
+ .recoverer(new RepublishMessageRecoverer(amqpTemplate(), "something", "somethingelse"))
+ .build();
+}
+```
+
+The `RepublishMessageRecoverer` publishes the message with additional information in message headers, such as the exception message, stack trace, original exchange, and routing key.
+Additional headers can be added by creating a subclass and overriding `additionalHeaders()`.
+The `deliveryMode` (or any other properties) can also be changed in the `additionalHeaders()`, as the following example shows:
+
+```
+RepublishMessageRecoverer recoverer = new RepublishMessageRecoverer(amqpTemplate, "error") {
+
+    protected Map<? extends String, ? extends Object> additionalHeaders(Message message, Throwable cause) {
+ message.getMessageProperties()
+ .setDeliveryMode(message.getMessageProperties().getReceivedDeliveryMode());
+ return null;
+ }
+
+};
+```
+
+Starting with version 2.0.5, the stack trace may be truncated if it is too large; this is because all headers have to fit in a single frame.
+By default, if the stack trace would cause less than 20,000 bytes ('headroom') to be available for other headers, it will be truncated.
+This can be adjusted by setting the recoverer’s `frameMaxHeadroom` property, if you need more or less space for other headers.
+Starting with versions 2.1.13, 2.2.3, the exception message is included in this calculation, and the amount of stack trace will be maximized using the following algorithm:
+
+* if the stack trace alone would exceed the limit, the exception message header will be truncated to 97 bytes plus `…` and the stack trace is truncated too.
+
+* if the stack trace is small, the message will be truncated (plus `…`) to fit in the available bytes (but the message within the stack trace itself is truncated to 97 bytes plus `…`).
+
+Whenever a truncation of any kind occurs, the original exception will be logged to retain the complete information.
+
+Starting with version 2.3.3, a new subclass `RepublishMessageRecovererWithConfirms` is provided; this supports both styles of publisher confirms and will wait for the confirmation before returning (or throw an exception if not confirmed or the message is returned).
+
+If the confirm type is `CORRELATED`, the subclass will also detect if a message is returned and throw an `AmqpMessageReturnedException`; if the publication is negatively acknowledged, it will throw an `AmqpNackReceivedException`.
+
+If the confirm type is `SIMPLE`, the subclass will invoke the `waitForConfirmsOrDie` method on the channel.
+
+See [Publisher Confirms and Returns](#cf-pub-conf-ret) for more information about confirms and returns.
+
+Starting with version 2.1, an `ImmediateRequeueMessageRecoverer` is added to throw an `ImmediateRequeueAmqpException`, which notifies a listener container to requeue the current failed message.
+
+##### Exception Classification for Spring Retry
+
+Spring Retry has a great deal of flexibility for determining which exceptions can invoke retry.
+The default configuration retries for all exceptions.
+Given that user exceptions are wrapped in a `ListenerExecutionFailedException`, we need to ensure that the classification examines the exception causes.
+The default classifier looks only at the top level exception.
+
+Since Spring Retry 1.0.3, the `BinaryExceptionClassifier` has a property called `traverseCauses` (default: `false`).
+When `true`, it traverses the exception causes until it finds a match or runs out of causes.
+
+To use this classifier for retry, you can use a `SimpleRetryPolicy` created with the constructor that takes the max attempts, the `Map` of `Exception` instances, and the boolean (`traverseCauses`) and inject this policy into the `RetryTemplate`.
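The difference `traverseCauses` makes can be illustrated with a plain-Java sketch of cause traversal (a conceptual illustration, not the Spring Retry implementation):

```java
import java.util.Map;

public class CauseTraversalSketch {

    // Walk the cause chain looking for a classified exception type, mimicking
    // the classifier's behavior with and without traverseCauses.
    static boolean shouldRetry(Throwable t,
            Map<Class<? extends Throwable>, Boolean> retryable, boolean traverseCauses) {
        Throwable current = t;
        while (current != null) {
            Boolean decision = retryable.get(current.getClass());
            if (decision != null) {
                return decision;
            }
            current = traverseCauses ? current.getCause() : null;
        }
        return false; // no classified type found
    }

    public static void main(String[] args) {
        // A user exception wrapped the way ListenerExecutionFailedException wraps causes.
        RuntimeException wrapped = new RuntimeException(new IllegalStateException("boom"));
        Map<Class<? extends Throwable>, Boolean> map = Map.of(IllegalStateException.class, true);
        System.out.println(shouldRetry(wrapped, map, false)); // top level only: no retry
        System.out.println(shouldRetry(wrapped, map, true));  // cause is matched: retry
    }
}
```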
+
+#### 4.1.22. Multiple Broker (or Cluster) Support
+
+Version 2.3 added more convenience when communicating between a single application and multiple brokers or broker clusters.
+The main benefit, on the consumer side, is that the infrastructure can automatically associate auto-declared queues with the appropriate broker.
+
+This is best illustrated with an example:
+
+```
+@SpringBootApplication(exclude = RabbitAutoConfiguration.class)
+public class Application {
+
+ public static void main(String[] args) {
+ SpringApplication.run(Application.class, args);
+ }
+
+ @Bean
+ CachingConnectionFactory cf1() {
+ return new CachingConnectionFactory("localhost");
+ }
+
+ @Bean
+ CachingConnectionFactory cf2() {
+ return new CachingConnectionFactory("otherHost");
+ }
+
+ @Bean
+ CachingConnectionFactory cf3() {
+ return new CachingConnectionFactory("thirdHost");
+ }
+
+ @Bean
+ SimpleRoutingConnectionFactory rcf(CachingConnectionFactory cf1,
+ CachingConnectionFactory cf2, CachingConnectionFactory cf3) {
+
+ SimpleRoutingConnectionFactory rcf = new SimpleRoutingConnectionFactory();
+ rcf.setDefaultTargetConnectionFactory(cf1);
+ rcf.setTargetConnectionFactories(Map.of("one", cf1, "two", cf2, "three", cf3));
+ return rcf;
+ }
+
+ @Bean("factory1-admin")
+ RabbitAdmin admin1(CachingConnectionFactory cf1) {
+ return new RabbitAdmin(cf1);
+ }
+
+ @Bean("factory2-admin")
+ RabbitAdmin admin2(CachingConnectionFactory cf2) {
+ return new RabbitAdmin(cf2);
+ }
+
+ @Bean("factory3-admin")
+ RabbitAdmin admin3(CachingConnectionFactory cf3) {
+ return new RabbitAdmin(cf3);
+ }
+
+ @Bean
+ public RabbitListenerEndpointRegistry rabbitListenerEndpointRegistry() {
+ return new RabbitListenerEndpointRegistry();
+ }
+
+ @Bean
+ public RabbitListenerAnnotationBeanPostProcessor postProcessor(RabbitListenerEndpointRegistry registry) {
+ MultiRabbitListenerAnnotationBeanPostProcessor postProcessor
+ = new MultiRabbitListenerAnnotationBeanPostProcessor();
+ postProcessor.setEndpointRegistry(registry);
+ postProcessor.setContainerFactoryBeanName("defaultContainerFactory");
+ return postProcessor;
+ }
+
+ @Bean
+ public SimpleRabbitListenerContainerFactory factory1(CachingConnectionFactory cf1) {
+ SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
+ factory.setConnectionFactory(cf1);
+ return factory;
+ }
+
+ @Bean
+ public SimpleRabbitListenerContainerFactory factory2(CachingConnectionFactory cf2) {
+ SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
+ factory.setConnectionFactory(cf2);
+ return factory;
+ }
+
+ @Bean
+ public SimpleRabbitListenerContainerFactory factory3(CachingConnectionFactory cf3) {
+ SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
+ factory.setConnectionFactory(cf3);
+ return factory;
+ }
+
+ @Bean
+ RabbitTemplate template(RoutingConnectionFactory rcf) {
+ return new RabbitTemplate(rcf);
+ }
+
+ @Bean
+ ConnectionFactoryContextWrapper wrapper(SimpleRoutingConnectionFactory rcf) {
+ return new ConnectionFactoryContextWrapper(rcf);
+ }
+
+}
+
+@Component
+class Listeners {
+
+ @RabbitListener(queuesToDeclare = @Queue("q1"), containerFactory = "factory1")
+ public void listen1(String in) {
+
+ }
+
+ @RabbitListener(queuesToDeclare = @Queue("q2"), containerFactory = "factory2")
+ public void listen2(String in) {
+
+ }
+
+ @RabbitListener(queuesToDeclare = @Queue("q3"), containerFactory = "factory3")
+ public void listen3(String in) {
+
+ }
+
+}
+```
+
+As you can see, we have declared three sets of infrastructure (connection factories, admins, and container factories).
+As discussed earlier, `@RabbitListener` can define which container factory to use; in this case, they also use `queuesToDeclare`, which causes the queue(s) to be declared on the broker if they do not already exist.
+By naming the `RabbitAdmin` beans with the convention `<container factory name>-admin`, the infrastructure is able to determine which admin should declare the queue.
+This will also work with `bindings = @QueueBinding(…)` whereby the exchange and binding will also be declared.
+It will NOT work with `queues`, since that expects the queue(s) to already exist.
+
+On the producer side, a convenient `ConnectionFactoryContextWrapper` class is provided, to make using the `RoutingConnectionFactory` (see [Routing Connection Factory](#routing-connection-factory)) simpler.
+
+As you can see above, a `SimpleRoutingConnectionFactory` bean has been added with routing keys `one`, `two` and `three`.
+There is also a `RabbitTemplate` that uses that factory.
+Here is an example of using that template with the wrapper to route to one of the broker clusters.
+
+```
+@Bean
+public ApplicationRunner runner(RabbitTemplate template, ConnectionFactoryContextWrapper wrapper) {
+ return args -> {
+ wrapper.run("one", () -> template.convertAndSend("q1", "toCluster1"));
+ wrapper.run("two", () -> template.convertAndSend("q2", "toCluster2"));
+ wrapper.run("three", () -> template.convertAndSend("q3", "toCluster3"));
+ };
+}
+```
+
+#### 4.1.23. Debugging
+
+Spring AMQP provides extensive logging, especially at the `DEBUG` level.
+
+If you wish to monitor the AMQP protocol between the application and broker, you can use a tool such as Wireshark, which has a plugin to decode the protocol.
+Alternatively, the RabbitMQ Java client comes with a very useful class called `Tracer`.
+When run as a `main`, by default, it listens on port 5673 and connects to port 5672 on localhost.
+You can run it and change your connection factory configuration to connect to port 5673 on localhost.
+It displays the decoded protocol on the console.
+Refer to the `Tracer` Javadoc for more information.
+
+### 4.2. Using the RabbitMQ Stream Plugin
+
+Version 2.4 introduces initial support for the [RabbitMQ Stream Plugin Java Client](https://github.com/rabbitmq/rabbitmq-stream-java-client) for the [RabbitMQ Stream Plugin](https://rabbitmq.com/stream.html).
+
+* `RabbitStreamTemplate`
+
+* `StreamListenerContainer`
+
+#### 4.2.1. Sending Messages
+
+The `RabbitStreamTemplate` provides a subset of the `RabbitTemplate` (AMQP) functionality.
+
+Example 1. RabbitStreamOperations
+
+```
+public interface RabbitStreamOperations extends AutoCloseable {
+
+ ListenableFuture<Boolean> send(Message message);
+
+ ListenableFuture<Boolean> convertAndSend(Object message);
+
+ ListenableFuture<Boolean> convertAndSend(Object message, @Nullable MessagePostProcessor mpp);
+
+ ListenableFuture<Boolean> send(com.rabbitmq.stream.Message message);
+
+ MessageBuilder messageBuilder();
+
+ MessageConverter messageConverter();
+
+ StreamMessageConverter streamMessageConverter();
+
+ @Override
+ void close() throws AmqpException;
+
+}
+```
+
+The `RabbitStreamTemplate` implementation has the following constructor and properties:
+
+Example 2. RabbitStreamTemplate
+
+```
+public RabbitStreamTemplate(Environment environment, String streamName) {
+}
+
+public void setMessageConverter(MessageConverter messageConverter) {
+}
+
+public void setStreamConverter(StreamMessageConverter streamConverter) {
+}
+
+public synchronized void setProducerCustomizer(ProducerCustomizer producerCustomizer) {
+}
+```
+
+The `MessageConverter` is used in the `convertAndSend` methods to convert the object to a Spring AMQP `Message`.
+
+The `StreamMessageConverter` is used to convert from a Spring AMQP `Message` to a native stream `Message`.
+
+You can also send native stream `Message`s directly; the `messageBuilder()` method provides access to the `Producer`'s message builder.
+
+The `ProducerCustomizer` provides a mechanism to customize the producer before it is built.
+
+Refer to the [Java Client Documentation](https://rabbitmq.github.io/rabbitmq-stream-java-client/stable/htmlsingle/) about customizing the `Environment` and `Producer`.
+
+#### 4.2.2. Receiving Messages
+
+Asynchronous message reception is provided by the `StreamListenerContainer` (and the `StreamRabbitListenerContainerFactory` when using `@RabbitListener`).
+
+The listener container requires an `Environment` as well as a single stream name.
+
+You can either receive Spring AMQP `Message`s using the classic `MessageListener`, or you can receive native stream `Message`s using a new interface:
+
+```
+public interface StreamMessageListener extends MessageListener {
+
+ void onStreamMessage(Message message, Context context);
+
+}
+```
+
+See [Message Listener Container Configuration](#containerAttributes) for information about supported properties.
+
+Similar to the template, the container has a `ConsumerCustomizer` property.
+
+Refer to the [Java Client Documentation](https://rabbitmq.github.io/rabbitmq-stream-java-client/stable/htmlsingle/) about customizing the `Environment` and `Consumer`.
+
+When using `@RabbitListener`, configure a `StreamRabbitListenerContainerFactory`; at this time, most `@RabbitListener` properties (`concurrency` and so on) are ignored; only `id`, `queues`, `autoStartup`, and `containerFactory` are supported.
+In addition, `queues` can contain only one stream name.
+
+#### 4.2.3. Examples
+
+```
+@Bean
+RabbitStreamTemplate streamTemplate(Environment env) {
+ RabbitStreamTemplate template = new RabbitStreamTemplate(env, "test.stream.queue1");
+ template.setProducerCustomizer((name, builder) -> builder.name("test"));
+ return template;
+}
+
+@Bean
+RabbitListenerContainerFactory<StreamListenerContainer> rabbitListenerContainerFactory(Environment env) {
+ return new StreamRabbitListenerContainerFactory(env);
+}
+
+@RabbitListener(queues = "test.stream.queue1")
+void listen(String in) {
+ ...
+}
+
+@Bean
+RabbitListenerContainerFactory<StreamListenerContainer> nativeFactory(Environment env) {
+ StreamRabbitListenerContainerFactory factory = new StreamRabbitListenerContainerFactory(env);
+ factory.setNativeListener(true);
+ factory.setConsumerCustomizer((id, builder) -> {
+ builder.name("myConsumer")
+ .offset(OffsetSpecification.first())
+ .manualTrackingStrategy();
+ });
+ return factory;
+}
+
+@RabbitListener(id = "test", queues = "test.stream.queue2", containerFactory = "nativeFactory")
+void nativeMsg(Message in, Context context) {
+ ...
+ context.storeOffset();
+}
+```
+
+### 4.3. Logging Subsystem AMQP Appenders
+
+The framework provides logging appenders for some popular logging subsystems:
+
+* logback (since Spring AMQP version 1.4)
+
+* log4j2 (since Spring AMQP version 1.6)
+
+The appenders are configured by using the normal mechanisms for the logging subsystem; available properties are specified in the following sections.
+
+#### 4.3.1. Common properties
+
+The following properties are available with all appenders:
+
+| Property | Default | Description |
+|-------------------------------------------|-------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| ``` exchangeName ``` | ``` logs ``` | Name of the exchange to which to publish log events. |
+| ``` exchangeType ``` | ``` topic ``` | Type of the exchange to which to publish log events — needed only if the appender declares the exchange. See `declareExchange`. |
+| ``` routingKeyPattern ``` | ``` %c.%p ``` | Logging subsystem pattern format to use to generate a routing key. |
+| ``` applicationId ``` | ``` ``` | Application ID — added to the routing key if the pattern includes `%X{applicationId}`. |
+| ``` senderPoolSize ``` | ``` 2 ``` | The number of threads to use to publish log events. |
+| ``` maxSenderRetries ``` | ``` 30 ``` | How many times to retry sending a message if the broker is unavailable or there is some other error. Retries are delayed as follows: `N ^ log(N)`, where `N` is the retry number. |
+| ``` addresses ``` | ``` ``` | A comma-delimited list of broker addresses in the following form: `host:port[,host:port]*` - overrides `host` and `port`. |
+| ``` host ``` | ``` localhost ``` | RabbitMQ host to which to connect. |
+| ``` port ``` | ``` 5672 ``` | RabbitMQ port to which to connect. |
+| ``` virtualHost ``` | ``` / ``` | RabbitMQ virtual host to which to connect. |
+| ``` username ``` | ``` guest ``` | RabbitMQ user to use when connecting. |
+| ``` password ``` | ``` guest ``` | RabbitMQ password for this user. |
+| ``` useSsl ``` | ``` false ``` | Whether to use SSL for the RabbitMQ connection. See [`RabbitConnectionFactoryBean` and Configuring SSL](#rabbitconnectionfactorybean-configuring-ssl) |
+| ``` verifyHostname ``` | ``` true ``` | Enable server hostname verification for TLS connections. See [`RabbitConnectionFactoryBean` and Configuring SSL](#rabbitconnectionfactorybean-configuring-ssl) |
+| ``` sslAlgorithm ``` | ``` null ``` | The SSL algorithm to use. |
+| ``` sslPropertiesLocation ``` | ``` null ``` | Location of the SSL properties file. |
+| ``` keyStore ``` | ``` null ``` | Location of the keystore. |
+| ``` keyStorePassphrase ``` | ``` null ``` | Passphrase for the keystore. |
+| ``` keyStoreType ``` | ``` JKS ``` | The keystore type. |
+| ``` trustStore ``` | ``` null ``` | Location of the truststore. |
+| ``` trustStorePassphrase ``` | ``` null ``` | Passphrase for the truststore. |
+| ``` trustStoreType ``` | ``` JKS ``` | The truststore type. |
+| ``` saslConfig ``` |``` null (RabbitMQ client default applies) ```| The `saslConfig` - see the javadoc for `RabbitUtils.stringToSaslConfig` for valid values. |
+| ``` contentType ``` | ``` text/plain ``` | `content-type` property of log messages. |
+| ``` contentEncoding ``` | ``` ``` | `content-encoding` property of log messages. |
+| ``` declareExchange ``` | ``` false ``` | Whether or not to declare the configured exchange when this appender starts. See also `durable` and `autoDelete`. |
+| ``` durable ``` | ``` true ``` | When `declareExchange` is `true`, the durable flag is set to this value. |
+| ``` autoDelete ``` | ``` false ``` | When `declareExchange` is `true`, the auto-delete flag is set to this value. |
+| ``` charset ``` | ``` null ``` | Character set to use when converting `String` to `byte[]`. Default: null (the system default charset is used). If the character set is unsupported on the current platform, we fall back to using the system character set. |
+| ``` deliveryMode ``` | ``` PERSISTENT ``` | `PERSISTENT` or `NON_PERSISTENT` to determine whether or not RabbitMQ should persist the messages. |
+| ``` generateId ``` | ``` false ``` | Used to determine whether the `messageId` property is set to a unique value. |
+|``` clientConnectionProperties ```| ``` null ``` | A comma-delimited list of `key:value` pairs for custom client properties to the RabbitMQ connection. |
+|``` addMdcAsHeaders ```| ``` true ``` |Until this property was introduced, MDC properties were always added to the RabbitMQ message headers. This can lead to problems for a large MDC, because RabbitMQ has a limited buffer size for all headers, and that buffer is quite small. By default, this is `true` for backward compatibility; setting it to `false` turns off serializing the MDC into headers. Note that the `JsonLayout` adds the MDC into the message by default.|
+
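Given the `maxSenderRetries` delay formula in the table above (`N ^ log(N)`, where `N` is the retry number), a quick sketch shows how the back-off grows:

```java
public class RetryDelaySketch {

    // Delay for retry N, per the N ^ log(N) formula.
    static double delay(int n) {
        return Math.pow(n, Math.log(n));
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 5; n++) {
            System.out.printf("retry %d -> delay %.2f%n", n, delay(n));
        }
    }
}
```

The first retry has a delay factor of 1, and the delay grows super-linearly from there.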
+#### 4.3.2. Log4j 2 Appender
+
+The following example shows how to configure a Log4j 2 appender:
+
+```
+<Appenders>
+    ...
+    <RabbitMQ name="rabbitmq"
+        addresses="foo:5672,bar:5672" user="guest" password="guest" virtualHost="/"
+        exchange="log4j2" exchangeType="topic" declareExchange="true" durable="true" autoDelete="false"
+        applicationId="myAppId" routingKeyPattern="%X{applicationId}.%c.%p"
+        contentType="text/plain" contentEncoding="UTF-8" generateId="true" deliveryMode="NON_PERSISTENT"
+        charset="UTF-8"
+        senderPoolSize="3" maxSenderRetries="5"
+        addMdcAsHeaders="false">
+    </RabbitMQ>
+</Appenders>
+```
+
+| |Starting with versions 1.6.10 and 1.7.3, by default, the log4j2 appender publishes the messages to RabbitMQ on the calling thread. This is because Log4j 2 does not, by default, create thread-safe events. If the broker is down, the `maxSenderRetries` is used to retry, with no delay between retries. If you wish to restore the previous behavior of publishing the messages on separate threads (`senderPoolSize`), you can set the `async` property to `true`. However, you also need to configure Log4j 2 to use the `DefaultLogEventFactory` instead of the `ReusableLogEventFactory`. One way to do that is to set the system property `-Dlog4j2.enable.threadlocals=false`. If you use asynchronous publishing with the `ReusableLogEventFactory`, events have a high likelihood of being corrupted due to cross-talk.|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+#### 4.3.3. Logback Appender
+
+The following example shows how to configure a logback appender:
+
+```
+<appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
+    <layout>
+        <pattern><![CDATA[ %d %p %t [%c] - <%m>%n ]]></pattern>
+    </layout>
+    <addresses>foo:5672,bar:5672</addresses>
+    <abbreviation>36</abbreviation>
+    <includeCallerData>false</includeCallerData>
+    <applicationId>myApplication</applicationId>
+    <routingKeyPattern>%property{applicationId}.%c.%p</routingKeyPattern>
+    <generateId>true</generateId>
+    <charset>UTF-8</charset>
+    <durable>false</durable>
+    <deliveryMode>NON_PERSISTENT</deliveryMode>
+    <declareExchange>true</declareExchange>
+    <autoDelete>false</autoDelete>
+</appender>
+```
+
+Starting with version 1.7.1, the Logback `AmqpAppender` provides an `includeCallerData` option, which is `false` by default.
+Extracting caller data can be rather expensive, because the log event has to create a throwable and inspect it to determine the calling location.
+Therefore, by default, caller data associated with an event is not extracted when the event is added to the event queue.
+You can configure the appender to include caller data by setting the `includeCallerData` property to `true`.
+
+Starting with version 2.0.0, the Logback `AmqpAppender` supports [Logback encoders](https://logback.qos.ch/manual/encoders.html) with the `encoder` option.
+The `encoder` and `layout` options are mutually exclusive.
+
+#### 4.3.4. Customizing the Messages
+
+By default, AMQP appenders populate the following message properties:
+
+* `deliveryMode`
+
+* `contentType`
+
+* `contentEncoding`, if configured
+
+* `messageId`, if `generateId` is configured
+
+* `timestamp` of the log event
+
+* `appId`, if `applicationId` is configured
+
+In addition they populate headers with the following values:
+
+* `categoryName` of the log event
+
+* The level of the log event
+
+* `thread`: the name of the thread where the log event happened
+
+* The location of the stack trace of the log event call
+
+* A copy of all the MDC properties (unless `addMdcAsHeaders` is set to `false`)
+
+Each of the appenders can be subclassed, letting you modify the messages before publishing.
+The following example shows how to customize log messages:
+
+```
+public class MyEnhancedAppender extends AmqpAppender {
+
+ @Override
+ public Message postProcessMessageBeforeSend(Message message, Event event) {
+ message.getMessageProperties().setHeader("foo", "bar");
+ return message;
+ }
+
+}
+```
+
+Starting with version 2.2.4, the log4j2 `AmqpAppender` can be extended by using `@PluginBuilderFactory` and also extending `AmqpAppender.Builder`, as the following example shows:
+
+```
+@Plugin(name = "MyEnhancedAppender", category = "Core", elementType = "appender", printObject = true)
+public class MyEnhancedAppender extends AmqpAppender {
+
+ public MyEnhancedAppender(String name, Filter filter, Layout<? extends Serializable> layout,
+ boolean ignoreExceptions, AmqpManager manager, BlockingQueue<Event> eventQueue) {
+ super(name, filter, layout, ignoreExceptions, manager, eventQueue);
+ }
+
+ @Override
+ public Message postProcessMessageBeforeSend(Message message, Event event) {
+ message.getMessageProperties().setHeader("foo", "bar");
+ return message;
+ }
+
+ @PluginBuilderFactory
+ public static Builder newBuilder() {
+ return new Builder();
+ }
+
+ protected static class Builder extends AmqpAppender.Builder {
+
+ @Override
+ protected AmqpAppender buildInstance(String name, Filter filter, Layout<? extends Serializable> layout,
+ boolean ignoreExceptions, AmqpManager manager, BlockingQueue<Event> eventQueue) {
+ return new MyEnhancedAppender(name, filter, layout, ignoreExceptions, manager, eventQueue);
+ }
+ }
+
+}
+```
+
+#### 4.3.5. Customizing the Client Properties
+
+You can add custom client properties by adding either string properties or more complex properties.
+
+##### Simple String Properties
+
+Each appender supports adding client properties to the RabbitMQ connection.
+
+The following example shows how to add a custom client property for logback:
+
+```
+<appender name="AMQP" ...>
+    ...
+    <clientConnectionProperties>thing1:thing2,cat:hat</clientConnectionProperties>
+    ...
+</appender>
+```
+
+Example 3. log4j2
+
+```
+<RabbitMQ name="rabbitmq"
+    ...
+    clientConnectionProperties="thing1:thing2,cat:hat"
+    ...
+/>
+```
+
+The properties are a comma-delimited list of `key:value` pairs.
+Keys and values cannot contain commas or colons.
+
+These properties appear on the RabbitMQ Admin UI when the connection is viewed.
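The `key:value` list format described above can be parsed with a straightforward split; this sketch illustrates the format only and is not the appender's actual parsing code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ClientPropertiesSketch {

    // Split "k1:v1,k2:v2" into a property map; because keys and values may not
    // contain commas or colons, a plain split is sufficient.
    static Map<String, String> parse(String properties) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String pair : properties.split(",")) {
            String[] kv = pair.split(":");
            result.put(kv[0].trim(), kv[1].trim());
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parse("thing1:thing2,cat:hat")); // {thing1=thing2, cat=hat}
    }
}
```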
+
+##### Advanced Technique for Logback
+
+You can subclass the Logback appender.
+Doing so lets you modify the client connection properties before the connection is established.
+The following example shows how to do so:
+
+```
+public class MyEnhancedAppender extends AmqpAppender {
+
+ private String thing1;
+
+ @Override
+ protected void updateConnectionClientProperties(Map<String, Object> clientProperties) {
+ clientProperties.put("thing1", this.thing1);
+ }
+
+ public void setThing1(String thing1) {
+ this.thing1 = thing1;
+ }
+
+}
+```
+
+Then you can add `<thing1>thing2</thing1>` to logback.xml.
+
+For String properties such as those shown in the preceding example, the previous technique can be used.
+Subclasses allow for adding richer properties (such as adding a `Map` or numeric property).
+
+#### 4.3.6. Providing a Custom Queue Implementation
+
+The `AmqpAppenders` use a `BlockingQueue` to asynchronously publish logging events to RabbitMQ.
+By default, a `LinkedBlockingQueue` is used.
+However, you can supply any kind of custom `BlockingQueue` implementation.
+
+The following example shows how to do so for Logback:
+
+```
+public class MyEnhancedAppender extends AmqpAppender {
+
+ @Override
+ protected BlockingQueue<Event> createEventQueue() {
+ return new ArrayBlockingQueue<>(100000); // a bounded queue with a suitable capacity
+ }
+
+}
+```
+
+The Log4j 2 appender supports using a [`BlockingQueueFactory`](https://logging.apache.org/log4j/2.x/manual/appenders.html#BlockingQueueFactory), as the following example shows:
+
+```
+
+ ...
+
+
+
+
+```
+
+### 4.4. Sample Applications
+
+The [Spring AMQP Samples](https://github.com/SpringSource/spring-amqp-samples) project includes two sample applications.
+The first is a simple “Hello World” example that demonstrates both synchronous and asynchronous message reception.
+It provides an excellent starting point for acquiring an understanding of the essential components.
+The second sample is based on a stock-trading use case to demonstrate the types of interaction that would be common in real world applications.
+In this chapter, we provide a quick walk-through of each sample so that you can focus on the most important components.
+The samples are both Maven-based, so you should be able to import them directly into any Maven-aware IDE (such as [SpringSource Tool Suite](https://www.springsource.org/sts)).
+
+#### 4.4.1. The “Hello World” Sample
+
+The “Hello World” sample demonstrates both synchronous and asynchronous message reception.
+You can import the `spring-rabbit-helloworld` sample into the IDE and then follow the discussion below.
+
+##### Synchronous Example
+
+Within the `src/main/java` directory, navigate to the `org.springframework.amqp.helloworld` package.
+Open the `HelloWorldConfiguration` class and notice that it contains the `@Configuration` annotation at the class level and notice some `@Bean` annotations at method-level.
+This is an example of Spring’s Java-based configuration.
+You can read more about that [here](https://docs.spring.io/spring/docs/current/spring-framework-reference/html/beans.html#beans-java).
+
+The following listing shows how the connection factory is created:
+
+```
+@Bean
+public CachingConnectionFactory connectionFactory() {
+ CachingConnectionFactory connectionFactory =
+ new CachingConnectionFactory("localhost");
+ connectionFactory.setUsername("guest");
+ connectionFactory.setPassword("guest");
+ return connectionFactory;
+}
+```
+
+The configuration also contains an instance of `RabbitAdmin`, which, by default, looks for any beans of type exchange, queue, or binding and then declares them on the broker.
+In fact, the `helloWorldQueue` bean that is generated in `HelloWorldConfiguration` is an example because it is an instance of `Queue`.
+
+The following listing shows the `helloWorldQueue` bean definition:
+
+```
+@Bean
+public Queue helloWorldQueue() {
+ return new Queue(this.helloWorldQueueName);
+}
+```
+
+Looking back at the `rabbitTemplate` bean configuration, you can see that it has the name of `helloWorldQueue` set as its `queue` property (for receiving messages) and for its `routingKey` property (for sending messages).
+
+Now that we have explored the configuration, we can look at the code that actually uses these components.
+First, open the `Producer` class from within the same package.
+It contains a `main()` method where the Spring `ApplicationContext` is created.
+
+The following listing shows the `main` method:
+
+```
+public static void main(String[] args) {
+ ApplicationContext context =
+ new AnnotationConfigApplicationContext(RabbitConfiguration.class);
+ AmqpTemplate amqpTemplate = context.getBean(AmqpTemplate.class);
+ amqpTemplate.convertAndSend("Hello World");
+ System.out.println("Sent: Hello World");
+}
+```
+
+In the preceding example, the `AmqpTemplate` bean is retrieved and used for sending a `Message`.
+Since the client code should rely on interfaces whenever possible, the type is `AmqpTemplate` rather than `RabbitTemplate`.
+Even though the bean created in `HelloWorldConfiguration` is an instance of `RabbitTemplate`, relying on the interface means that this code is more portable (you can change the configuration independently of the code).
+Since the `convertAndSend()` method is invoked, the template delegates to its `MessageConverter` instance.
+In this case, it uses the default `SimpleMessageConverter`, but a different implementation could be provided to the `rabbitTemplate` bean, as defined in `HelloWorldConfiguration`.
+
+Now open the `Consumer` class.
+It actually shares the same configuration base class, which means it shares the `rabbitTemplate` bean.
+That is why we configured that template with both a `routingKey` (for sending) and a `queue` (for receiving).
+As we describe in [`AmqpTemplate`](#amqp-template), you could instead pass the 'routingKey' argument to the send method and the 'queue' argument to the receive method.
+The `Consumer` code is basically a mirror image of the Producer, calling `receiveAndConvert()` rather than `convertAndSend()`.
+
+The following listing shows the main method for the `Consumer`:
+
+```
+public static void main(String[] args) {
+ ApplicationContext context =
+ new AnnotationConfigApplicationContext(RabbitConfiguration.class);
+ AmqpTemplate amqpTemplate = context.getBean(AmqpTemplate.class);
+ System.out.println("Received: " + amqpTemplate.receiveAndConvert());
+}
+```
+
+If you run the `Producer` and then run the `Consumer`, you should see `Received: Hello World` in the console output.
+
+##### Asynchronous Example
+
+[Synchronous Example](#hello-world-sync) walked through the synchronous Hello World sample.
+This section describes a slightly more advanced but significantly more powerful option.
+With a few modifications, the Hello World sample can provide an example of asynchronous reception, also known as message-driven POJOs.
+In fact, there is a sub-package that provides exactly that: `org.springframework.amqp.samples.helloworld.async`.
+
+Again, we start with the sending side.
+Open the `ProducerConfiguration` class and notice that it creates a `connectionFactory` and a `rabbitTemplate` bean.
+This time, since the configuration is dedicated to the message sending side, we do not even need any queue definitions, and the `RabbitTemplate` has only the 'routingKey' property set.
+Recall that messages are sent to an exchange rather than being sent directly to a queue.
+The AMQP default exchange is a direct exchange with no name.
+All queues are bound to that default exchange with their name as the routing key.
+That is why we only need to provide the routing key here.
+
+The following listing shows the `rabbitTemplate` definition:
+
+```
+@Bean
+public RabbitTemplate rabbitTemplate() {
+ RabbitTemplate template = new RabbitTemplate(connectionFactory());
+ template.setRoutingKey(this.helloWorldQueueName);
+ return template;
+}
+```
+
+Since this sample demonstrates asynchronous message reception, the producing side is designed to continuously send messages (if it were a message-per-execution model like the synchronous version, it would not be quite so obvious that it is, in fact, a message-driven consumer).
+The component responsible for continuously sending messages is defined as an inner class within the `ProducerConfiguration`.
+It is configured to run every three seconds.
+
+The following listing shows the component:
+
+```
+static class ScheduledProducer {
+
+ @Autowired
+ private volatile RabbitTemplate rabbitTemplate;
+
+ private final AtomicInteger counter = new AtomicInteger();
+
+ @Scheduled(fixedRate = 3000)
+ public void sendMessage() {
+ rabbitTemplate.convertAndSend("Hello World " + counter.incrementAndGet());
+ }
+}
+```
+
+You do not need to understand all of the details, since the real focus should be on the receiving side (which we cover next).
+However, if you are not yet familiar with Spring task scheduling support, you can learn more [here](https://docs.spring.io/spring/docs/current/spring-framework-reference/html/scheduling.html#scheduling-annotation-support).
+The short story is that the `postProcessor` bean in the `ProducerConfiguration` registers the task with a scheduler.
+
+Now we can turn to the receiving side.
+To emphasize the message-driven POJO behavior, we start with the component that reacts to the messages.
+The class is called `HelloWorldHandler` and is shown in the following listing:
+
+```
+public class HelloWorldHandler {
+
+ public void handleMessage(String text) {
+ System.out.println("Received: " + text);
+ }
+
+}
+```
+
+That class is a POJO.
+It does not extend any base class, it does not implement any interfaces, and it does not even contain any imports.
+It is being “adapted” to the `MessageListener` interface by the Spring AMQP `MessageListenerAdapter`.
+You can then configure that adapter on a `SimpleMessageListenerContainer`.
+For this sample, the container is created in the `ConsumerConfiguration` class.
+You can see the POJO wrapped in the adapter there.
+
+The following listing shows how the `listenerContainer` is defined:
+
+```
+@Bean
+public SimpleMessageListenerContainer listenerContainer() {
+ SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
+ container.setConnectionFactory(connectionFactory());
+ container.setQueueName(this.helloWorldQueueName);
+ container.setMessageListener(new MessageListenerAdapter(new HelloWorldHandler()));
+ return container;
+}
+```
+
+The `SimpleMessageListenerContainer` is a Spring lifecycle component and, by default, starts automatically.
+If you look in the `Consumer` class, you can see that its `main()` method consists of nothing more than a one-line bootstrap to create the `ApplicationContext`.
+The Producer’s `main()` method is also a one-line bootstrap, since the component whose method is annotated with `@Scheduled` also starts automatically.
+You can start the `Producer` and `Consumer` in any order, and you should see messages being sent and received every three seconds.
+
+#### 4.4.2. Stock Trading
+
+The Stock Trading sample demonstrates more advanced messaging scenarios than [the Hello World sample](#hello-world-sample).
+However, the configuration is very similar, if a bit more involved.
+Since we walked through the Hello World configuration in detail, here, we focus on what makes this sample different.
+There is a server that pushes market data (stock quotations) to a topic exchange.
+Then, clients can subscribe to the market data feed by binding a queue with a routing pattern (for example, `app.stock.quotes.nasdaq.*`).
+The other main feature of this demo is a request-reply “stock trade” interaction that is initiated by the client and handled by the server.
+That involves a private `replyTo` queue whose name is sent by the client within the order request message itself.
+
+The server’s core configuration is in the `RabbitServerConfiguration` class within the `org.springframework.amqp.rabbit.stocks.config.server` package.
+It extends the `AbstractStockAppRabbitConfiguration`.
+That is where the resources common to the server and client are defined, including the market data topic exchange (whose name is 'app.stock.marketdata') and the queue that the server exposes for stock trades (whose name is 'app.stock.request').
+In that common configuration file, you also see that a `Jackson2JsonMessageConverter` is configured on the `RabbitTemplate`.
+
+The server-specific configuration consists of two things.
+First, it configures the market data exchange on the `RabbitTemplate` so that it does not need to provide that exchange name with every call to send a `Message`.
+It does this within an abstract callback method defined in the base configuration class.
+The following listing shows that method:
+
+```
+public void configureRabbitTemplate(RabbitTemplate rabbitTemplate) {
+ rabbitTemplate.setExchange(MARKET_DATA_EXCHANGE_NAME);
+}
+```
+
+Second, the stock request queue is declared.
+It does not require any explicit bindings in this case, because it is bound to the default no-name exchange with its own name as the routing key.
+As mentioned earlier, the AMQP specification defines that behavior.
+The following listing shows the definition of the `stockRequestQueue` bean:
+
+```
+@Bean
+public Queue stockRequestQueue() {
+ return new Queue(STOCK_REQUEST_QUEUE_NAME);
+}
+```
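+As a consequence of that default binding, a message sent to the no-name exchange with the queue name as the routing key is delivered directly to the queue. For illustration only (this is not code from the sample), the equivalent explicit send looks like the following:
+
+```
+// Illustration: the empty string names the default (no-name) exchange;
+// the routing key is the queue name itself.
+rabbitTemplate.convertAndSend("", STOCK_REQUEST_QUEUE_NAME, tradeRequest);
+```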
+
+Now that you have seen the configuration of the server’s AMQP resources, navigate to the `org.springframework.amqp.rabbit.stocks` package under the `src/test/java` directory.
+There, you can see the actual `Server` class that provides a `main()` method.
+It creates an `ApplicationContext` based on the `server-bootstrap.xml` config file.
+There, you can see the scheduled task that publishes dummy market data.
+That configuration relies upon Spring’s `task` namespace support.
+The bootstrap config file also imports a few other files.
+The most interesting one is `server-messaging.xml`, which is directly under `src/main/resources`.
+There, you can see the `messageListenerContainer` bean that is responsible for handling the stock trade requests.
+Finally, have a look at the `serverHandler` bean that is defined in `server-handlers.xml` (which is also in 'src/main/resources').
+That bean is an instance of the `ServerHandler` class and is a good example of a message-driven POJO that can also send reply messages.
+Notice that it is not itself coupled to the framework or any of the AMQP concepts.
+It accepts a `TradeRequest` and returns a `TradeResponse`.
+The following listing shows the definition of the `handleMessage` method:
+
+```
+public TradeResponse handleMessage(TradeRequest tradeRequest) {
+    ...
+}
+```
+
+Now that we have seen the most important configuration and code for the server, we can turn to the client.
+The best starting point is probably `RabbitClientConfiguration`, in the `org.springframework.amqp.rabbit.stocks.config.client` package.
+Notice that it declares two queues without providing explicit names.
+The following listing shows the bean definitions for the two queues:
+
+```
+@Bean
+public Queue marketDataQueue() {
+ return amqpAdmin().declareQueue();
+}
+
+@Bean
+public Queue traderJoeQueue() {
+ return amqpAdmin().declareQueue();
+}
+```
+
+Those are private queues, and unique names are generated automatically.
+The first generated queue is used by the client to bind to the market data exchange that has been exposed by the server.
+Recall that, in AMQP, consumers interact with queues while producers interact with exchanges.
+The “binding” of queues to exchanges is what tells the broker to deliver (or route) messages from a given exchange to a queue.
+Since the market data exchange is a topic exchange, the binding can be expressed with a routing pattern.
+The `RabbitClientConfiguration` does so with a `Binding` object, and that object is generated with the `BindingBuilder` fluent API.
+The following listing shows the `Binding`:
+
+```
+@Value("${stocks.quote.pattern}")
+private String marketDataRoutingKey;
+
+@Bean
+public Binding marketDataBinding() {
+ return BindingBuilder.bind(
+ marketDataQueue()).to(marketDataExchange()).with(marketDataRoutingKey);
+}
+```
+
+Notice that the actual value has been externalized in a properties file (`client.properties` under `src/main/resources`), and that we use Spring’s `@Value` annotation to inject that value.
+This is generally a good idea.
+Otherwise, the value would have been hardcoded in a class and unmodifiable without recompilation.
+In this case, it is much easier to run multiple versions of the client while making changes to the routing pattern used for binding.
+We can try that now.
+
+Start by running `org.springframework.amqp.rabbit.stocks.Server` and then `org.springframework.amqp.rabbit.stocks.Client`.
+You should see dummy quotations for `NASDAQ` stocks, because the current value associated with the 'stocks.quote.pattern' key in `client.properties` is 'app.stock.quotes.nasdaq.*'.
+Now, while keeping the existing `Server` and `Client` running, change that property value to 'app.stock.quotes.nyse.*' and start a second `Client` instance.
+You should see that the first client still receives NASDAQ quotes while the second client receives NYSE quotes.
+You could instead change the pattern to get all stocks or even an individual ticker.
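+For reference, in a topic pattern, `*` matches exactly one word and `#` matches zero or more words. A few hypothetical alternative bindings, using the same fluent API (the ticker name is made up):
+
+```
+// All quotes, from any market:
+BindingBuilder.bind(marketDataQueue()).to(marketDataExchange()).with("app.stock.quotes.#");
+
+// A single ticker on NASDAQ:
+BindingBuilder.bind(marketDataQueue()).to(marketDataExchange()).with("app.stock.quotes.nasdaq.TCKR");
+```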
+
+The final feature we explore is the request-reply interaction from the client’s perspective.
+Recall that we have already seen the `ServerHandler` that accepts `TradeRequest` objects and returns `TradeResponse` objects.
+The corresponding code on the `Client` side is `RabbitStockServiceGateway` in the `org.springframework.amqp.rabbit.stocks.gateway` package.
+It delegates to the `RabbitTemplate` in order to send messages.
+The following listing shows the `send` method:
+
+```
+public void send(TradeRequest tradeRequest) {
+ getRabbitTemplate().convertAndSend(tradeRequest, new MessagePostProcessor() {
+ public Message postProcessMessage(Message message) throws AmqpException {
+ message.getMessageProperties().setReplyTo(new Address(defaultReplyToQueue));
+ try {
+ message.getMessageProperties().setCorrelationId(
+ UUID.randomUUID().toString().getBytes("UTF-8"));
+ }
+ catch (UnsupportedEncodingException e) {
+ throw new AmqpException(e);
+ }
+ return message;
+ }
+ });
+}
+```
+
+Notice that, prior to sending the message, it sets the `replyTo` address.
+It provides the queue that was generated by the `traderJoeQueue` bean definition (shown earlier).
+The following listing shows the `@Bean` definition for the `StockServiceGateway` class itself:
+
+```
+@Bean
+public StockServiceGateway stockServiceGateway() {
+ RabbitStockServiceGateway gateway = new RabbitStockServiceGateway();
+ gateway.setRabbitTemplate(rabbitTemplate());
+ gateway.setDefaultReplyToQueue(traderJoeQueue());
+ return gateway;
+}
+```
+
+If you are no longer running the server and client, start them now.
+Try sending a request with the format of '100 TCKR'.
+After a brief artificial delay that simulates “processing” of the request, you should see a confirmation message appear on the client.
+
+#### 4.4.3. Receiving JSON from Non-Spring Applications
+
+Spring applications, when sending JSON, set the `__TypeId__` header to the fully qualified class name to assist the receiving application in converting the JSON back to a Java object.
+
+The `spring-rabbit-json` sample explores several techniques to convert the JSON from a non-Spring application.
+
+See also [Jackson2JsonMessageConverter](#json-message-converter) as well as the [Javadoc for the `DefaultClassMapper`](https://docs.spring.io/spring-amqp/docs/current/api/index.html?org/springframework/amqp/support/converter/DefaultClassMapper.html).
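+For example, a non-Spring producer using the plain RabbitMQ Java client can cooperate with a Spring consumer by setting that header itself. The following is a sketch only; the queue name and the `com.example.Foo` target class are assumptions:
+
+```
+// Publish JSON with the __TypeId__ header so a Spring consumer configured
+// with a Jackson2JsonMessageConverter can map it back to com.example.Foo.
+AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
+        .contentType("application/json")
+        .headers(Map.of("__TypeId__", "com.example.Foo"))
+        .build();
+channel.basicPublish("", "someQueue", props,
+        "{\"bar\":\"baz\"}".getBytes(StandardCharsets.UTF_8));
+```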
+
+### 4.5. Testing Support
+
+Writing integration tests for asynchronous applications is necessarily more complex than testing simpler applications.
+This is made more complex when abstractions such as the `@RabbitListener` annotations come into the picture.
+The question is how to verify that, after sending a message, the listener received the message as expected.
+
+The framework itself has many unit and integration tests.
+Some use mocks, while others use integration testing with a live RabbitMQ broker.
+You can consult those tests for some ideas for testing scenarios.
+
+Spring AMQP version 1.6 introduced the `spring-rabbit-test` jar, which provides support for testing some of these more complex scenarios.
+It is anticipated that this project will expand over time, but we need community feedback to make suggestions for the features needed to help with testing.
+Please use [JIRA](https://jira.spring.io/browse/AMQP) or [GitHub Issues](https://github.com/spring-projects/spring-amqp/issues) to provide such feedback.
+
+#### 4.5.1. @SpringRabbitTest
+
+Use this annotation to add infrastructure beans to the Spring test `ApplicationContext`.
+This is not necessary when using, for example, `@SpringBootTest`, since Spring Boot’s auto-configuration adds the beans.
+
+Beans that are registered are:
+
+* `CachingConnectionFactory` (`autoConnectionFactory`). If `@RabbitEnabled` is present, its connection factory is used.
+
+* `RabbitTemplate` (`autoRabbitTemplate`)
+
+* `RabbitAdmin` (`autoRabbitAdmin`)
+
+* `RabbitListenerContainerFactory` (`autoContainerFactory`)
+
+In addition, the beans associated with `@EnableRabbit` (to support `@RabbitListener`) are added.
+
+Example 4. JUnit5 example
+
+```
+@SpringJunitConfig
+@SpringRabbitTest
+public class MyRabbitTests {
+
+ @Autowired
+ private RabbitTemplate template;
+
+ @Autowired
+ private RabbitAdmin admin;
+
+ @Autowired
+ private RabbitListenerEndpointRegistry registry;
+
+ @Test
+ void test() {
+ ...
+ }
+
+ @Configuration
+ public static class Config {
+
+ ...
+
+ }
+
+}
+```
+
+With JUnit4, replace `@SpringJunitConfig` with `@RunWith(SpringRunner.class)`.
+
+#### 4.5.2. Mockito `Answer<?>` Implementations
+
+There are currently two `Answer<?>` implementations to help with testing.
+
+The first, `LatchCountDownAndCallRealMethodAnswer`, provides an `Answer` that returns `null` and counts down a latch.
+The following example shows how to use `LatchCountDownAndCallRealMethodAnswer`:
+
+```
+LatchCountDownAndCallRealMethodAnswer answer = this.harness.getLatchAnswerFor("myListener", 2);
+doAnswer(answer)
+ .when(listener).foo(anyString(), anyString());
+
+...
+
+assertThat(answer.await(10)).isTrue();
+```
+
+The second, `LambdaAnswer<T>`, provides a mechanism to optionally call the real method and provides an opportunity to return a custom result, based on the `InvocationOnMock` and the result (if any).
+
+Consider the following POJO:
+
+```
+public class Thing {
+
+ public String thing(String thing) {
+ return thing.toUpperCase();
+ }
+
+}
+```
+
+The following class tests the `Thing` POJO:
+
+```
+Thing thing = spy(new Thing());
+
+doAnswer(new LambdaAnswer<String>(true, (i, r) -> r + r))
+    .when(thing).thing(anyString());
+assertEquals("THINGTHING", thing.thing("thing"));
+
+doAnswer(new LambdaAnswer<String>(true, (i, r) -> r + i.getArguments()[0]))
+    .when(thing).thing(anyString());
+assertEquals("THINGthing", thing.thing("thing"));
+
+doAnswer(new LambdaAnswer<String>(false, (i, r) ->
+    "" + i.getArguments()[0] + i.getArguments()[0])).when(thing).thing(anyString());
+assertEquals("thingthing", thing.thing("thing"));
+```
+
+Starting with version 2.2.3, the answers capture any exceptions thrown by the method under test.
+Use `answer.getExceptions()` to get a reference to them.
+
+When used in conjunction with the [`@RabbitListenerTest` and `RabbitListenerTestHarness`](#test-harness), use `harness.getLambdaAnswerFor("listenerId", true, …)` to get a properly constructed answer for the listener.
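+Continuing the earlier `LatchCountDownAndCallRealMethodAnswer` example (same assumed `myListener` id), a test might then assert that no invocation failed (a sketch, assuming AssertJ assertions):
+
+```
+assertThat(answer.await(10)).isTrue();
+// Fails the test if any of the listener invocations threw an exception:
+assertThat(answer.getExceptions()).isEmpty();
+```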
+
+#### 4.5.3. `@RabbitListenerTest` and `RabbitListenerTestHarness`
+
+Annotating one of your `@Configuration` classes with `@RabbitListenerTest` causes the framework to replace the
+standard `RabbitListenerAnnotationBeanPostProcessor` with a subclass called `RabbitListenerTestHarness` (it also enables `@RabbitListener` detection through `@EnableRabbit`).
+
+The `RabbitListenerTestHarness` enhances the listener in two ways.
+First, it wraps the listener in a `Mockito Spy`, enabling normal `Mockito` stubbing and verification operations.
+It can also add an `Advice` to the listener, enabling access to the arguments, result, and any exceptions that are thrown.
+You can control which (or both) of these are enabled with attributes on the `@RabbitListenerTest`.
+The latter is provided for access to lower-level data about the invocation.
+It also supports blocking the test thread until the async listener is called.
+
+| |`final` `@RabbitListener` methods cannot be spied or advised. Also, only listeners with an `id` attribute can be spied or advised.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------|
+
+Consider some examples.
+
+The following example uses spy:
+
+```
+@Configuration
+@RabbitListenerTest
+public class Config {
+
+ @Bean
+ public Listener listener() {
+ return new Listener();
+ }
+
+ ...
+
+}
+
+public class Listener {
+
+ @RabbitListener(id="foo", queues="#{queue1.name}")
+ public String foo(String foo) {
+ return foo.toUpperCase();
+ }
+
+ @RabbitListener(id="bar", queues="#{queue2.name}")
+ public void foo(@Payload String foo, @Header("amqp_receivedRoutingKey") String rk) {
+ ...
+ }
+
+}
+
+public class MyTests {
+
+ @Autowired
+ private RabbitListenerTestHarness harness; (1)
+
+ @Test
+ public void testTwoWay() throws Exception {
+ assertEquals("FOO", this.rabbitTemplate.convertSendAndReceive(this.queue1.getName(), "foo"));
+
+ Listener listener = this.harness.getSpy("foo"); (2)
+ assertNotNull(listener);
+ verify(listener).foo("foo");
+ }
+
+ @Test
+ public void testOneWay() throws Exception {
+ Listener listener = this.harness.getSpy("bar");
+ assertNotNull(listener);
+
+ LatchCountDownAndCallRealMethodAnswer answer = this.harness.getLatchAnswerFor("bar", 2); (3)
+ doAnswer(answer).when(listener).foo(anyString(), anyString()); (4)
+
+ this.rabbitTemplate.convertAndSend(this.queue2.getName(), "bar");
+ this.rabbitTemplate.convertAndSend(this.queue2.getName(), "baz");
+
+ assertTrue(answer.await(10));
+ verify(listener).foo("bar", this.queue2.getName());
+ verify(listener).foo("baz", this.queue2.getName());
+ }
+
+}
+```
+
+|**1**| Inject the harness into the test case so we can get access to the spy. |
+|-----|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+|**2**| Get a reference to the spy so we can verify it was invoked as expected. Since this is a send and receive operation, there is no need to suspend the test thread because it was already suspended in the `RabbitTemplate` waiting for the reply. |
+|**3**|In this case, we’re only using a send operation so we need a latch to wait for the asynchronous call to the listener on the container thread. We use one of the [`Answer<?>`](#mockito-answer) implementations to help with that. IMPORTANT: Due to the way the listener is spied, it is important to use `harness.getLatchAnswerFor()` to get a properly configured answer for the spy.|
+|**4**| Configure the spy to invoke the `Answer`. |
+
+The following example uses the capture advice:
+
+```
+@Configuration
+@ComponentScan
+@RabbitListenerTest(spy = false, capture = true)
+public class Config {
+
+}
+
+@Service
+public class Listener {
+
+ private boolean failed;
+
+ @RabbitListener(id="foo", queues="#{queue1.name}")
+ public String foo(String foo) {
+ return foo.toUpperCase();
+ }
+
+ @RabbitListener(id="bar", queues="#{queue2.name}")
+ public void foo(@Payload String foo, @Header("amqp_receivedRoutingKey") String rk) {
+ if (!failed && foo.equals("ex")) {
+ failed = true;
+ throw new RuntimeException(foo);
+ }
+ failed = false;
+ }
+
+}
+
+public class MyTests {
+
+ @Autowired
+ private RabbitListenerTestHarness harness; (1)
+
+ @Test
+ public void testTwoWay() throws Exception {
+ assertEquals("FOO", this.rabbitTemplate.convertSendAndReceive(this.queue1.getName(), "foo"));
+
+ InvocationData invocationData =
+ this.harness.getNextInvocationDataFor("foo", 0, TimeUnit.SECONDS); (2)
+ assertThat(invocationData.getArguments()[0], equalTo("foo")); (3)
+ assertThat((String) invocationData.getResult(), equalTo("FOO"));
+ }
+
+ @Test
+ public void testOneWay() throws Exception {
+ this.rabbitTemplate.convertAndSend(this.queue2.getName(), "bar");
+ this.rabbitTemplate.convertAndSend(this.queue2.getName(), "baz");
+ this.rabbitTemplate.convertAndSend(this.queue2.getName(), "ex");
+
+ InvocationData invocationData =
+ this.harness.getNextInvocationDataFor("bar", 10, TimeUnit.SECONDS); (4)
+ Object[] args = invocationData.getArguments();
+ assertThat((String) args[0], equalTo("bar"));
+ assertThat((String) args[1], equalTo(queue2.getName()));
+
+ invocationData = this.harness.getNextInvocationDataFor("bar", 10, TimeUnit.SECONDS);
+ args = invocationData.getArguments();
+ assertThat((String) args[0], equalTo("baz"));
+
+ invocationData = this.harness.getNextInvocationDataFor("bar", 10, TimeUnit.SECONDS);
+ args = invocationData.getArguments();
+ assertThat((String) args[0], equalTo("ex"));
+ assertEquals("ex", invocationData.getThrowable().getMessage()); (5)
+ }
+
+}
+```
+
+|**1**| Inject the harness into the test case so we can get access to the spy. |
+|-----|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+|**2**|Use `harness.getNextInvocationDataFor()` to retrieve the invocation data - in this case since it was a request/reply scenario there is no need to wait for any time because the test thread was suspended in the `RabbitTemplate` waiting for the result.|
+|**3**| We can then verify that the argument and result was as expected. |
+|**4**| This time we need some time to wait for the data, since it’s an async operation on the container thread and we need to suspend the test thread. |
+|**5**| When the listener throws an exception, it is available in the `throwable` property of the invocation data. |
+
+| |When using custom `Answer<?>` implementations with the harness, in order to operate properly, such answers should subclass `ForwardsInvocation`, get the actual listener (not the spy) from the harness (`getDelegate("myListener")`), and call `super.answer(invocation)`. See the provided [Mockito `Answer<?>` Implementations](#mockito-answer) source code for examples.|
+|---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+#### 4.5.4. Using `TestRabbitTemplate`
+
+The `TestRabbitTemplate` is provided to perform some basic integration testing without the need for a broker.
+When you add it as a `@Bean` in your test case, it discovers all the listener containers in the context, whether declared as `@Bean` or `<bean/>` or by using the `@RabbitListener` annotation.
+It currently only supports routing by queue name.
+The template extracts the message listener from the container and invokes it directly on the test thread.
+Request-reply messaging (`sendAndReceive` methods) is supported for listeners that return replies.
+
+The following test case uses the template:
+
+```
+@RunWith(SpringRunner.class)
+public class TestRabbitTemplateTests {
+
+ @Autowired
+ private TestRabbitTemplate template;
+
+ @Autowired
+ private Config config;
+
+ @Test
+ public void testSimpleSends() {
+ this.template.convertAndSend("foo", "hello1");
+ assertThat(this.config.fooIn, equalTo("foo:hello1"));
+ this.template.convertAndSend("bar", "hello2");
+ assertThat(this.config.barIn, equalTo("bar:hello2"));
+ assertThat(this.config.smlc1In, equalTo("smlc1:"));
+ this.template.convertAndSend("foo", "hello3");
+ assertThat(this.config.fooIn, equalTo("foo:hello1"));
+ this.template.convertAndSend("bar", "hello4");
+ assertThat(this.config.barIn, equalTo("bar:hello2"));
+ assertThat(this.config.smlc1In, equalTo("smlc1:hello3hello4"));
+
+ this.template.setBroadcast(true);
+ this.template.convertAndSend("foo", "hello5");
+ assertThat(this.config.fooIn, equalTo("foo:hello1foo:hello5"));
+ this.template.convertAndSend("bar", "hello6");
+ assertThat(this.config.barIn, equalTo("bar:hello2bar:hello6"));
+ assertThat(this.config.smlc1In, equalTo("smlc1:hello3hello4hello5hello6"));
+ }
+
+ @Test
+ public void testSendAndReceive() {
+ assertThat(this.template.convertSendAndReceive("baz", "hello"), equalTo("baz:hello"));
+ }
+```
+
+```
+ @Configuration
+ @EnableRabbit
+ public static class Config {
+
+ public String fooIn = "";
+
+ public String barIn = "";
+
+ public String smlc1In = "smlc1:";
+
+ @Bean
+ public TestRabbitTemplate template() throws IOException {
+ return new TestRabbitTemplate(connectionFactory());
+ }
+
+ @Bean
+ public ConnectionFactory connectionFactory() throws IOException {
+ ConnectionFactory factory = mock(ConnectionFactory.class);
+ Connection connection = mock(Connection.class);
+ Channel channel = mock(Channel.class);
+ willReturn(connection).given(factory).createConnection();
+ willReturn(channel).given(connection).createChannel(anyBoolean());
+ given(channel.isOpen()).willReturn(true);
+ return factory;
+ }
+
+ @Bean
+ public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() throws IOException {
+ SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
+ factory.setConnectionFactory(connectionFactory());
+ return factory;
+ }
+
+ @RabbitListener(queues = "foo")
+ public void foo(String in) {
+ this.fooIn += "foo:" + in;
+ }
+
+ @RabbitListener(queues = "bar")
+ public void bar(String in) {
+ this.barIn += "bar:" + in;
+ }
+
+ @RabbitListener(queues = "baz")
+ public String baz(String in) {
+ return "baz:" + in;
+ }
+
+ @Bean
+ public SimpleMessageListenerContainer smlc1() throws IOException {
+ SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory());
+ container.setQueueNames("foo", "bar");
+ container.setMessageListener(new MessageListenerAdapter(new Object() {
+
+ @SuppressWarnings("unused")
+ public void handleMessage(String in) {
+ smlc1In += in;
+ }
+
+ }));
+ return container;
+ }
+
+ }
+
+}
+```
+
+#### 4.5.5. JUnit4 `@Rules`
+
+Spring AMQP version 1.7 and later provide an additional jar called `spring-rabbit-junit`.
+This jar contains a couple of utility `@Rule` instances for use when running JUnit4 tests.
+See [JUnit5 Conditions](#junit5-conditions) for JUnit5 testing.
+
+##### Using `BrokerRunning`
+
+`BrokerRunning` provides a mechanism to let tests succeed when a broker is not running (on `localhost`, by default).
+
+It also has utility methods to initialize and empty queues and to delete queues and exchanges.
+
+The following example shows its usage:
+
+```
+@ClassRule
+public static BrokerRunning brokerRunning = BrokerRunning.isRunningWithEmptyQueues("foo", "bar");
+
+@AfterClass
+public static void tearDown() {
+ brokerRunning.removeTestQueues("some.other.queue.too"); // removes foo, bar as well
+}
+```
+
+There are several `isRunning…` static methods, such as `isBrokerAndManagementRunning()`, which verifies the broker has the management plugin enabled.
+
+###### Configuring the Rule
+
+There are times when you want tests to fail if there is no broker, such as a nightly CI build.
+To disable the rule at runtime, set an environment variable called `RABBITMQ_SERVER_REQUIRED` to `true`.
+
+You can override the broker properties, such as the hostname, with either setters or environment variables.
+
+The following example shows how to override properties with setters:
+
+```
+@ClassRule
+public static BrokerRunning brokerRunning = BrokerRunning.isRunningWithEmptyQueues("foo", "bar");
+
+static {
+ brokerRunning.setHostName("10.0.0.1");
+}
+
+@AfterClass
+public static void tearDown() {
+ brokerRunning.removeTestQueues("some.other.queue.too"); // removes foo, bar as well
+}
+```
+
+You can also override properties by setting the following environment variables:
+
+```
+public static final String BROKER_ADMIN_URI = "RABBITMQ_TEST_ADMIN_URI";
+public static final String BROKER_HOSTNAME = "RABBITMQ_TEST_HOSTNAME";
+public static final String BROKER_PORT = "RABBITMQ_TEST_PORT";
+public static final String BROKER_USER = "RABBITMQ_TEST_USER";
+public static final String BROKER_PW = "RABBITMQ_TEST_PASSWORD";
+public static final String BROKER_ADMIN_USER = "RABBITMQ_TEST_ADMIN_USER";
+public static final String BROKER_ADMIN_PW = "RABBITMQ_TEST_ADMIN_PASSWORD";
+```
+
+These environment variables override the default settings (`localhost:5672` for amqp and `[localhost:15672/api/](http://localhost:15672/api/)` for the management REST API).
+
+Changing the host name affects both the `amqp` and `management` REST API connection (unless the admin uri is explicitly set).
+
+`BrokerRunning` also provides a `static` method called `setEnvironmentVariableOverrides` that lets you pass in a map containing these variables.
+They override system environment variables.
+This might be useful if you wish to use different configuration for tests in multiple test suites.
+IMPORTANT: The method must be called before invoking any of the `isRunning()` static methods that create the rule instance.
+Variable values are applied to all instances created after this invocation.
+Invoke `clearEnvironmentVariableOverrides()` to reset the rule to use defaults (including any actual environment variables).
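+A sketch of how that might look (the host and port values here are placeholders):
+
+```
+static {
+    Map<String, String> env = new HashMap<>();
+    env.put("RABBITMQ_TEST_HOSTNAME", "10.0.0.1");
+    env.put("RABBITMQ_TEST_PORT", "5673");
+    BrokerRunning.setEnvironmentVariableOverrides(env);
+}
+
+// Declared after the overrides are set, so the rule instance picks them up:
+@ClassRule
+public static BrokerRunning brokerRunning = BrokerRunning.isRunningWithEmptyQueues("foo", "bar");
+```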
+
+In your test cases, you can use the `brokerRunning` when creating the connection factory; `getConnectionFactory()` returns the rule’s RabbitMQ `ConnectionFactory`.
+The following example shows how to do so:
+
+```
+@Bean
+public CachingConnectionFactory rabbitConnectionFactory() {
+ return new CachingConnectionFactory(brokerRunning.getConnectionFactory());
+}
+```
+
+##### Using `LongRunningIntegrationTest`
+
+`LongRunningIntegrationTest` is a rule that disables long running tests.
+You might want to use this on a developer system but ensure that the rule is disabled on, for example, nightly CI builds.
+
+The following example shows its usage:
+
+```
+@Rule
+public LongRunningIntegrationTest longTests = new LongRunningIntegrationTest();
+```
+
+To disable the rule at runtime, set an environment variable called `RUN_LONG_INTEGRATION_TESTS` to `true`.
+
+#### 4.5.6. JUnit5 Conditions
+
+Version 2.0.2 introduced support for JUnit5.
+
+##### Using the `@RabbitAvailable` Annotation
+
+This class-level annotation is similar to the `BrokerRunning` `@Rule` discussed in [JUnit4 `@Rules`](#junit-rules).
+It is processed by the `RabbitAvailableCondition`.
+
+The annotation has three properties:
+
+* `queues`: An array of queues that are declared (and purged) before each test and deleted when all tests are complete.
+
+* `management`: Set this to `true` if your tests also require the management plugin installed on the broker.
+
+* `purgeAfterEach`: (Since version 2.2) when `true` (default), the `queues` will be purged between tests.
+
+It is used to check whether the broker is available and skip the tests if not.
+As discussed in [Configuring the Rule](#brokerRunning-configure), the environment variable called `RABBITMQ_SERVER_REQUIRED`, if `true`, causes the tests to fail fast if there is no broker.
+You can configure the condition by using environment variables as discussed in [Configuring the Rule](#brokerRunning-configure).
+
+In addition, the `RabbitAvailableCondition` supports argument resolution for parameterized test constructors and methods.
+Two argument types are supported:
+
+* `BrokerRunningSupport`: The instance (before 2.2, this was a JUnit 4 `BrokerRunning` instance)
+
+* `ConnectionFactory`: The `BrokerRunningSupport` instance’s RabbitMQ connection factory
+
+The following example shows both:
+
+```
+@RabbitAvailable(queues = "rabbitAvailableTests.queue")
+public class RabbitAvailableCTORInjectionTests {
+
+ private final ConnectionFactory connectionFactory;
+
+ public RabbitAvailableCTORInjectionTests(BrokerRunningSupport brokerRunning) {
+ this.connectionFactory = brokerRunning.getConnectionFactory();
+ }
+
+ @Test
+ public void test(ConnectionFactory cf) throws Exception {
+ assertSame(cf, this.connectionFactory);
+ Connection conn = this.connectionFactory.newConnection();
+ Channel channel = conn.createChannel();
+ DeclareOk declareOk = channel.queueDeclarePassive("rabbitAvailableTests.queue");
+ assertEquals(0, declareOk.getConsumerCount());
+ channel.close();
+ conn.close();
+ }
+
+}
+```
+
+The preceding test is in the framework itself and verifies the argument injection and that the condition created the queue properly.
+
+A practical user test might be as follows:
+
+```
+@RabbitAvailable(queues = "rabbitAvailableTests.queue")
+public class RabbitAvailableCTORInjectionTests {
+
+ private final CachingConnectionFactory connectionFactory;
+
+ public RabbitAvailableCTORInjectionTests(BrokerRunningSupport brokerRunning) {
+ this.connectionFactory =
+ new CachingConnectionFactory(brokerRunning.getConnectionFactory());
+ }
+
+ @Test
+ public void test() throws Exception {
+ RabbitTemplate template = new RabbitTemplate(this.connectionFactory);
+ ...
+ }
+}
+```
+
+When you use a Spring annotation application context within a test class, you can get a reference to the condition’s connection factory through a static method called `RabbitAvailableCondition.getBrokerRunning()`.
+
+| |Starting with version 2.2, `getBrokerRunning()` returns a `BrokerRunningSupport` object; previously, the JUnit 4 `BrokerRunning` instance was returned. The new class has the same API as `BrokerRunning`.|
+|---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The following test comes from the framework and demonstrates the usage:
+
+```
+@RabbitAvailable(queues = {
+ RabbitTemplateMPPIntegrationTests.QUEUE,
+ RabbitTemplateMPPIntegrationTests.REPLIES })
+@SpringJUnitConfig
+@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
+public class RabbitTemplateMPPIntegrationTests {
+
+ public static final String QUEUE = "mpp.tests";
+
+ public static final String REPLIES = "mpp.tests.replies";
+
+ @Autowired
+ private RabbitTemplate template;
+
+ @Autowired
+ private Config config;
+
+ @Test
+ public void test() {
+
+ ...
+
+ }
+
+ @Configuration
+ @EnableRabbit
+ public static class Config {
+
+ @Bean
+ public CachingConnectionFactory cf() {
+ return new CachingConnectionFactory(RabbitAvailableCondition
+ .getBrokerRunning()
+ .getConnectionFactory());
+ }
+
+ @Bean
+ public RabbitTemplate template() {
+
+ ...
+
+ }
+
+ @Bean
+ public SimpleRabbitListenerContainerFactory
+ rabbitListenerContainerFactory() {
+
+ ...
+
+ }
+
+ @RabbitListener(queues = QUEUE)
+ public byte[] foo(byte[] in) {
+ return in;
+ }
+
+ }
+
+}
+```
+
+##### Using the `@LongRunning` Annotation
+
+Similar to the `LongRunningIntegrationTest` JUnit 4 `@Rule`, this annotation causes tests to be skipped unless an environment variable (or system property) is set to `true`.
+The following example shows how to use it:
+
+```
+@RabbitAvailable(queues = SimpleMessageListenerContainerLongTests.QUEUE)
+@LongRunning
+public class SimpleMessageListenerContainerLongTests {
+
+ public static final String QUEUE = "SimpleMessageListenerContainerLongTests.queue";
+
+...
+
+}
+```
+
+By default, the variable is `RUN_LONG_INTEGRATION_TESTS`, but you can specify the variable name in the annotation’s `value` attribute.
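+
+For example, a test gated on a custom variable might look like the following (a sketch; the variable and class names are illustrative):
+
+```
+@RabbitAvailable(queues = SimpleMessageListenerContainerLongTests.QUEUE)
+@LongRunning("RUN_MY_LONG_TESTS")
+public class SimpleMessageListenerContainerLongTests {
+
+    public static final String QUEUE = "SimpleMessageListenerContainerLongTests.queue";
+
+    ...
+
+}
+```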
+
+## 5. Spring Integration - Reference
+
+This part of the reference documentation provides a quick introduction to the AMQP support within the Spring Integration project.
+
+### 5.1. Spring Integration AMQP Support
+
+This brief chapter covers the relationship between the Spring Integration and the Spring AMQP projects.
+
+#### 5.1.1. Introduction
+
+The [Spring Integration](https://www.springsource.org/spring-integration) project includes AMQP Channel Adapters and Gateways that build upon the Spring AMQP project.
+Those adapters are developed and released in the Spring Integration project.
+In Spring Integration, “Channel Adapters” are unidirectional (one-way), whereas “Gateways” are bidirectional (request-reply).
+We provide an inbound-channel-adapter, an outbound-channel-adapter, an inbound-gateway, and an outbound-gateway.
+
+Since the AMQP adapters are part of the Spring Integration release, the documentation is available as part of the Spring Integration distribution.
+We provide a quick overview of the main features here.
+See the [Spring Integration Reference Guide](https://docs.spring.io/spring-integration/reference/htmlsingle/) for much more detail.
+
+#### 5.1.2. Inbound Channel Adapter
+
+To receive AMQP Messages from a queue, you can configure an `<int-amqp:inbound-channel-adapter>`.
+The following example shows how to configure an inbound channel adapter (channel and queue names are illustrative):
+
+```
+<int-amqp:inbound-channel-adapter channel="fromAMQP"
+        queue-names="some.queue"
+        connection-factory="rabbitConnectionFactory"/>
+```
+
+#### 5.1.3. Outbound Channel Adapter
+
+To send AMQP Messages to an exchange, you can configure an `<int-amqp:outbound-channel-adapter>`.
+You can optionally provide a 'routing-key' in addition to the exchange name.
+The following example shows how to define an outbound channel adapter (channel, exchange, and routing-key values are illustrative):
+
+```
+<int-amqp:outbound-channel-adapter channel="toAMQP"
+        exchange-name="some.exchange"
+        routing-key="foo"
+        amqp-template="rabbitTemplate"/>
+```
+
+#### 5.1.4. Inbound Gateway
+
+To receive an AMQP Message from a queue and respond to its reply-to address, you can configure an `<int-amqp:inbound-gateway>`.
+The following example shows how to define an inbound gateway (channel and queue names are illustrative):
+
+```
+<int-amqp:inbound-gateway request-channel="fromAMQP"
+        queue-names="some.queue"
+        connection-factory="rabbitConnectionFactory"/>
+```
+
+#### 5.1.5. Outbound Gateway
+
+To send AMQP Messages to an exchange and receive back a response from a remote client, you can configure an `<int-amqp:outbound-gateway>`.
+You can optionally provide a 'routing-key' in addition to the exchange name.
+The following example shows how to define an outbound gateway (channel, exchange, and routing-key values are illustrative):
+
+```
+<int-amqp:outbound-gateway request-channel="toAMQP"
+        reply-channel="fromAMQP"
+        exchange-name="some.exchange"
+        routing-key="foo"
+        amqp-template="rabbitTemplate"/>
+```
+
+## 6. Other Resources
+
+In addition to this reference documentation, a number of other resources can help you learn about AMQP.
+
+### 6.1. Further Reading
+
+For those who are not familiar with AMQP, the [specification](https://www.amqp.org/resources/download) is actually quite readable.
+It is, of course, the authoritative source of information, and the Spring AMQP code should be easy to understand for anyone who is familiar with the spec.
+Our current implementation of the RabbitMQ support is based on their 2.8.x version, and it officially supports AMQP 0.8 and 0.9.1.
+We recommend reading the 0.9.1 document.
+
+There are many great articles, presentations, and blogs available on the RabbitMQ [Getting Started](https://www.rabbitmq.com/how.html) page.
+Since that is currently the only supported implementation for Spring AMQP, we also recommend that as a general starting point for all broker-related concerns.
+
+## Appendix A: Change History
+
+This section describes what changes have been made as versions have changed.
+
+### A.1. Current Release
+
+See [What’s New](#whats-new).
+
+### A.2. Previous Releases
+
+#### A.2.1. Changes in 2.3 Since 2.2
+
+This section describes the changes between version 2.2 and version 2.3.
+See [Change History](#change-history) for changes in previous versions.
+
+##### Connection Factory Changes
+
+Two additional connection factories are now provided.
+See [Choosing a Connection Factory](#choosing-factory) for more information.
+
+##### `@RabbitListener` Changes
+
+You can now specify a reply content type.
+See [Reply ContentType](#reply-content-type) for more information.
+
+##### Message Converter Changes
+
+The `Jackson2JsonMessageConverter` s can now deserialize abstract classes (including interfaces) if the `ObjectMapper` is configured with a custom deserializer.
+See [Deserializing Abstract Classes](#jackson-abstract) for more information.
+
+##### Testing Changes
+
+A new annotation, `@SpringRabbitTest`, is provided to automatically configure some infrastructure beans when you are not using `SpringBootTest`.
+See [@SpringRabbitTest](#spring-rabbit-test) for more information.
+
+##### RabbitTemplate Changes
+
+The template’s `ReturnCallback` has been refactored as `ReturnsCallback` for simpler use in lambda expressions.
+See [Correlated Publisher Confirms and Returns](#template-confirms) for more information.
+
+When using returns and correlated confirms, the `CorrelationData` now requires a unique `id` property.
+See [Correlated Publisher Confirms and Returns](#template-confirms) for more information.
+
+When using direct reply-to, you can now configure the template such that the server does not need to return correlation data with the reply.
+See [RabbitMQ Direct reply-to](#direct-reply-to) for more information.
+
+##### Listener Container Changes
+
+A new listener container property `consumeDelay` is now available; it is helpful when using the [RabbitMQ Sharding Plugin](https://github.com/rabbitmq/rabbitmq-sharding).
+
+The default `JavaLangErrorHandler` now calls `System.exit(99)`.
+To revert to the previous behavior (do nothing), add a no-op handler.
+
+The containers now support the `globalQos` property to apply the `prefetchCount` globally for the channel rather than for each consumer on the channel.
+
+See [Message Listener Container Configuration](#containerAttributes) for more information.
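+
+As a sketch of the `globalQos` property described above (the connection factory and prefetch value are illustrative), applying the prefetch to the whole channel rather than to each consumer might look like:
+
+```
+SimpleMessageListenerContainer container =
+        new SimpleMessageListenerContainer(connectionFactory);
+container.setPrefetchCount(100);
+// apply the prefetch of 100 to the channel as a whole, not per consumer
+container.setGlobalQos(true);
+```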
+
+##### MessagePostProcessor Changes
+
+The compressing `MessagePostProcessor` s now use a comma to separate multiple content encodings instead of a colon.
+The decompressors can handle both formats but, if you produce messages with this version that are consumed by versions earlier than 2.2.12, you should configure the compressor to use the old delimiter.
+See the IMPORTANT note in [Modifying Messages - Compression and More](#post-processing) for more information.
+
+##### Multiple Broker Support Improvements
+
+See [Multiple Broker (or Cluster) Support](#multi-rabbit) for more information.
+
+##### RepublishMessageRecoverer Changes
+
+A new subclass of this recoverer is now provided that supports publisher confirms.
+See [Message Listeners and the Asynchronous Case](#async-listeners) for more information.
+
+#### A.2.2. Changes in 2.2 Since 2.1
+
+This section describes the changes between version 2.1 and version 2.2.
+
+##### Package Changes
+
+The following classes/interfaces have been moved from `org.springframework.amqp.rabbit.core.support` to `org.springframework.amqp.rabbit.batch`:
+
+* `BatchingStrategy`
+
+* `MessageBatch`
+
+* `SimpleBatchingStrategy`
+
+In addition, `ListenerExecutionFailedException` has been moved from `org.springframework.amqp.rabbit.listener.exception` to `org.springframework.amqp.rabbit.support`.
+
+##### Dependency Changes
+
+JUnit (4) is now an optional dependency and will no longer appear as a transitive dependency.
+
+The `spring-rabbit-junit` module is now a **compile** dependency of the `spring-rabbit-test` module, so a single `spring-rabbit-test` dependency provides the full stack of testing utilities for AMQP components.
+
+##### "Breaking" API Changes
+
+The JUnit (5) `RabbitAvailableCondition.getBrokerRunning()` now returns a `BrokerRunningSupport` instance instead of a `BrokerRunning`, which depends on JUnit 4.
+It has the same API, so updating any references is only a matter of changing the class name.
+See [JUnit5 Conditions](#junit5-conditions) for more information.
+
+##### ListenerContainer Changes
+
+Messages with fatal exceptions are now rejected and NOT requeued, by default, even if the acknowledge mode is manual.
+See [Exception Handling](#exception-handling) for more information.
+
+Listener performance can now be monitored using Micrometer `Timer` s.
+See [Monitoring Listener Performance](#micrometer) for more information.
+
+##### @RabbitListener Changes
+
+You can now configure an `executor` on each listener, overriding the factory configuration, to more easily identify threads associated with the listener.
+You can now override the container factory’s `acknowledgeMode` property with the annotation’s `ackMode` property.
+See [overriding container factory properties](#listener-property-overrides) for more information.
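+
+For example, overriding the factory's acknowledge mode for a single listener might look like the following (a sketch; the queue name and method are illustrative):
+
+```
+@RabbitListener(queues = "manual.acks", ackMode = "MANUAL")
+public void process(String in, Channel channel,
+        @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException {
+    ...
+    channel.basicAck(tag, false);
+}
+```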
+
+When using [batching](#receiving-batch), `@RabbitListener` methods can now receive a complete batch of messages in one call instead of getting them one-at-a-time.
+
+When receiving batched messages one-at-a-time, the last message has the `isLastInBatch` message property set to true.
+
+In addition, received batched messages now contain the `amqp_batchSize` header.
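+
+A listener receiving batched messages one-at-a-time might inspect these properties as follows (a sketch; the queue name is illustrative, and the header is accessed by the literal name given above):
+
+```
+@RabbitListener(queues = "batch.queue")
+public void listen(Message message,
+        @Header(name = "amqp_batchSize", required = false) Integer batchSize) {
+    // true for the final message of a producer-created batch
+    boolean last = message.getMessageProperties().isLastInBatch();
+    ...
+}
+```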
+
+Listeners can also consume batches created in the `SimpleMessageListenerContainer`, even if the batch is not created by the producer.
+See [Choosing a Container](#choose-container) for more information.
+
+Spring Data Projection interfaces are now supported by the `Jackson2JsonMessageConverter`.
+See [Using Spring Data Projection Interfaces](#data-projection) for more information.
+
+The `Jackson2JsonMessageConverter` now assumes the content is JSON if there is no `contentType` property, or it is the default (`application/octet-stream`).
+See [Converting from a `Message`](#Jackson2JsonMessageConverter-from-message) for more information.
+
+Similarly, the `Jackson2XmlMessageConverter` now assumes the content is XML if there is no `contentType` property, or it is the default (`application/octet-stream`).
+See [`Jackson2XmlMessageConverter`](#jackson2xml) for more information.
+
+When a `@RabbitListener` method returns a result, the bean and `Method` are now available in the reply message properties.
+This allows configuration of a `beforeSendReplyMessagePostProcessor` to, for example, set a header in the reply to indicate which method was invoked on the server.
+See [Reply Management](#async-annotation-driven-reply) for more information.
+
+You can now configure a `ReplyPostProcessor` to make modifications to a reply message before it is sent.
+See [Reply Management](#async-annotation-driven-reply) for more information.
+
+##### AMQP Logging Appenders Changes
+
+The Log4J and Logback `AmqpAppender` s now support a `verifyHostname` SSL option.
+
+These appenders can now also be configured to not add MDC entries as headers.
+The `addMdcAsHeaders` boolean option has been introduced to control this behavior.
+
+The appenders now support the `SaslConfig` property.
+
+See [Logging Subsystem AMQP Appenders](#logging) for more information.
+
+##### MessageListenerAdapter Changes
+
+The `MessageListenerAdapter` now provides a new `buildListenerArguments(Object, Channel, Message)` method to build an array of arguments to be passed to the target listener; the old method is deprecated.
+See [`MessageListenerAdapter`](#message-listener-adapter) for more information.
+
+##### Exchange/Queue Declaration Changes
+
+The `ExchangeBuilder` and `QueueBuilder` fluent APIs used to create `Exchange` and `Queue` objects for declaration by `RabbitAdmin` now support "well known" arguments.
+See [Builder API for Queues and Exchanges](#builder-api) for more information.
+
+The `RabbitAdmin` has a new property `explicitDeclarationsOnly`.
+See [Conditional Declaration](#conditional-declaration) for more information.
+
+##### Connection Factory Changes
+
+The `CachingConnectionFactory` has a new property `shuffleAddresses`.
+When providing a list of broker node addresses, the list will be shuffled before creating a connection so that the order in which the connections are attempted is random.
+See [Connecting to a Cluster](#cluster) for more information.
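+
+A minimal sketch of the `shuffleAddresses` property (the host addresses are illustrative):
+
+```
+CachingConnectionFactory ccf = new CachingConnectionFactory();
+ccf.setAddresses("host1:5672,host2:5672,host3:5672");
+// try the nodes in random order each time a new connection is created
+ccf.setShuffleAddresses(true);
+```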
+
+When using Publisher confirms and returns, the callbacks are now invoked on the connection factory’s `executor`.
+This avoids a possible deadlock in the `amqp-clients` library if you perform rabbit operations from within the callback.
+See [Correlated Publisher Confirms and Returns](#template-confirms) for more information.
+
+Also, the publisher confirm type is now specified with the `ConfirmType` enum instead of the two mutually exclusive setter methods.
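+
+For example, selecting correlated confirms with the enum might look like this (a sketch; the host is illustrative):
+
+```
+CachingConnectionFactory ccf = new CachingConnectionFactory("localhost");
+// replaces the previous mutually exclusive setters for simple/correlated confirms
+ccf.setPublisherConfirmType(CachingConnectionFactory.ConfirmType.CORRELATED);
+```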
+
+The `RabbitConnectionFactoryBean` now uses TLS 1.2 by default when SSL is enabled.
+See [`RabbitConnectionFactoryBean` and Configuring SSL](#rabbitconnectionfactorybean-configuring-ssl) for more information.
+
+##### New MessagePostProcessor Classes
+
+Classes `DeflaterPostProcessor` and `InflaterPostProcessor` were added to support compression and decompression, respectively, when the message content-encoding is set to `deflate`.
+
+##### Other Changes
+
+The `Declarables` object (for declaring multiple queues, exchanges, bindings) now has a filtered getter for each type.
+See [Declaring Collections of Exchanges, Queues, and Bindings](#collection-declaration) for more information.
+
+You can now customize each `Declarable` bean before the `RabbitAdmin` processes the declaration thereof.
+See [Automatic Declaration of Exchanges, Queues, and Bindings](#automatic-declaration) for more information.
+
+`singleActiveConsumer()` has been added to the `QueueBuilder` to set the `x-single-active-consumer` queue argument.
+See [Builder API for Queues and Exchanges](#builder-api) for more information.
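+
+As a sketch (the queue name is illustrative), declaring a single-active-consumer queue with the builder might look like:
+
+```
+Queue queue = QueueBuilder.durable("sac.queue")
+        .singleActiveConsumer()  // sets the x-single-active-consumer argument
+        .build();
+```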
+
+Outbound headers with values of type `Class<?>` are now mapped using `getName()` instead of `toString()`.
+See [Message Properties Converters](#message-properties-converters) for more information.
+
+Recovery of failed producer-created batches is now supported.
+See [Retry with Batch Listeners](#batch-retry) for more information.
+
+#### A.2.3. Changes in 2.1 Since 2.0
+
+##### AMQP Client library
+
+Spring AMQP now uses the 5.4.x version of the `amqp-client` library provided by the RabbitMQ team.
+This client has auto-recovery configured by default.
+See [RabbitMQ Automatic Connection/Topology recovery](#auto-recovery).
+
+| |As of version 4.0, the client enables automatic recovery by default. While compatible with this feature, Spring AMQP has its own recovery mechanisms and the client recovery feature generally is not needed. We recommend disabling `amqp-client` automatic recovery, to avoid getting `AutoRecoverConnectionNotCurrentlyOpenException` instances when the broker is available but the connection has not yet recovered. Starting with version 1.7.1, Spring AMQP disables it unless you explicitly create your own RabbitMQ connection factory and provide it to the `CachingConnectionFactory`. RabbitMQ `ConnectionFactory` instances created by the `RabbitConnectionFactoryBean` also have the option disabled by default.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+##### Package Changes
+
+Certain classes have moved to different packages.
+Most are internal classes and do not affect user applications.
+Two exceptions are `ChannelAwareMessageListener` and `RabbitListenerErrorHandler`.
+These interfaces are now in `org.springframework.amqp.rabbit.listener.api`.
+
+##### Publisher Confirms Changes
+
+Channels enabled for publisher confirmations are not returned to the cache while there are outstanding confirmations.
+See [Correlated Publisher Confirms and Returns](#template-confirms) for more information.
+
+##### Listener Container Factory Improvements
+
+You can now use the listener container factories to create any listener container, not only those for use with `@RabbitListener` annotations or the `@RabbitListenerEndpointRegistry`.
+See [Using Container Factories](#using-container-factories) for more information.
+
+`ChannelAwareMessageListener` now inherits from `MessageListener`.
+
+##### Broker Event Listener
+
+A `BrokerEventListener` is introduced to publish selected broker events as `ApplicationEvent` instances.
+See [Broker Event Listener](#broker-events) for more information.
+
+##### RabbitAdmin Changes
+
+The `RabbitAdmin` discovers beans of type `Declarables` (which is a container for `Declarable` - `Queue`, `Exchange`, and `Binding` objects) and declares the contained objects on the broker.
+Users are discouraged from using the old mechanism of declaring collections of these objects (such as `List<Queue>`) and should use `Declarables` beans instead.
+By default, the old mechanism is disabled.
+See [Declaring Collections of Exchanges, Queues, and Bindings](#collection-declaration) for more information.
+
+`AnonymousQueue` instances are now declared with `x-queue-master-locator` set to `client-local` by default, to ensure the queues are created on the node the application is connected to.
+See [Configuring the Broker](#broker-configuration) for more information.
+
+##### RabbitTemplate Changes
+
+You can now configure the `RabbitTemplate` with the `noLocalReplyConsumer` option to control a `noLocal` flag for reply consumers in the `sendAndReceive()` operations.
+See [Request/Reply Messaging](#request-reply) for more information.
+
+`CorrelationData` for publisher confirmations now has a `ListenableFuture`, which you can use to get the acknowledgment instead of using a callback.
+When returns and confirmations are enabled, the correlation data, if provided, is populated with the returned message.
+See [Correlated Publisher Confirms and Returns](#template-confirms) for more information.
+
+A method called `replyTimedOut` is now provided to notify subclasses that a reply has timed out, allowing for any state cleanup.
+See [Reply Timeout](#reply-timeout) for more information.
+
+You can now specify an `ErrorHandler` to be invoked when using request/reply with a `DirectReplyToMessageListenerContainer` (the default) when exceptions occur when replies are delivered (for example, late replies).
+See `setReplyErrorHandler` on the `RabbitTemplate`.
+(Also since 2.0.11).
+
+##### Message Conversion
+
+We introduced a new `Jackson2XmlMessageConverter` to support converting messages from and to XML format.
+See [`Jackson2XmlMessageConverter`](#jackson2xml) for more information.
+
+##### Management REST API
+
+The `RabbitManagementTemplate` is now deprecated in favor of the direct `com.rabbitmq.http.client.Client` (or `com.rabbitmq.http.client.ReactorNettyClient`) usage.
+See [RabbitMQ REST API](#management-rest-api) for more information.
+
+##### `@RabbitListener` Changes
+
+The listener container factory can now be configured with a `RetryTemplate` and, optionally, a `RecoveryCallback` used when sending replies.
+See [Enable Listener Endpoint Annotations](#async-annotation-driven-enable) for more information.
+
+##### Async `@RabbitListener` Return
+
+`@RabbitListener` methods can now return `ListenableFuture<?>` or `Mono<?>`.
+See [Asynchronous `@RabbitListener` Return Types](#async-returns) for more information.
+
+##### Connection Factory Bean Changes
+
+By default, the `RabbitConnectionFactoryBean` now calls `enableHostnameVerification()`.
+To revert to the previous behavior, set the `enableHostnameVerification` property to `false`.
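+
+For example, reverting to the previous behavior might look like the following (a sketch; SSL is assumed to be enabled on the bean):
+
+```
+RabbitConnectionFactoryBean fb = new RabbitConnectionFactoryBean();
+fb.setUseSSL(true);
+// opt out of the new default hostname verification
+fb.setEnableHostnameVerification(false);
+```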
+
+##### Connection Factory Changes
+
+The `CachingConnectionFactory` now unconditionally disables auto-recovery in the underlying RabbitMQ `ConnectionFactory`, even if a pre-configured instance is provided in a constructor.
+While steps have been taken to make Spring AMQP compatible with auto recovery, certain corner cases have arisen where issues remain.
+Spring AMQP has had its own recovery mechanism since 1.0.0 and does not need to use the recovery provided by the client.
+While it is still possible to enable the feature (by calling `cachingConnectionFactory.getRabbitConnectionFactory().setAutomaticRecoveryEnabled(true)`) after the `CachingConnectionFactory` is constructed, **we strongly recommend that you not do so**.
+We recommend that you use a separate RabbitMQ `ConnectionFactory` if you need auto recovery connections when using the client factory directly (rather than using Spring AMQP components).
+
+##### Listener Container Changes
+
+The default `ConditionalRejectingErrorHandler` now completely discards messages that cause fatal errors if an `x-death` header is present.
+See [Exception Handling](#exception-handling) for more information.
+
+##### Immediate requeue
+
+A new `ImmediateRequeueAmqpException` is introduced to notify a listener container that the message has to be re-queued.
+To use this feature, a new `ImmediateRequeueMessageRecoverer` implementation is added.
+
+See [Message Listeners and the Asynchronous Case](#async-listeners) for more information.
+
+#### A.2.4. Changes in 2.0 Since 1.7
+
+##### Using `CachingConnectionFactory`
+
+Starting with version 2.0.2, you can configure the `RabbitTemplate` to use a different connection to that used by listener containers.
+This change avoids deadlocked consumers when producers are blocked for any reason.
+See [Using a Separate Connection](#separate-connection) for more information.
+
+##### AMQP Client library
+
+Spring AMQP now uses the new 5.0.x version of the `amqp-client` library provided by the RabbitMQ team.
+This client has auto recovery configured by default.
+See [RabbitMQ Automatic Connection/Topology recovery](#auto-recovery).
+
+| |As of version 4.0, the client enables automatic recovery by default. While compatible with this feature, Spring AMQP has its own recovery mechanisms, and the client recovery feature generally is not needed. We recommend that you disable `amqp-client` automatic recovery, to avoid getting `AutoRecoverConnectionNotCurrentlyOpenException` instances when the broker is available but the connection has not yet recovered. Starting with version 1.7.1, Spring AMQP disables it unless you explicitly create your own RabbitMQ connection factory and provide it to the `CachingConnectionFactory`. RabbitMQ `ConnectionFactory` instances created by the `RabbitConnectionFactoryBean` also have the option disabled by default.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+##### General Changes
+
+The `ExchangeBuilder` now builds durable exchanges by default.
+The `@Exchange` annotation used within a `@QueueBinding` also declares durable exchanges by default.
+The `@Queue` annotation used within a `@RabbitListener` by default declares durable queues if named and non-durable if anonymous.
+See [Builder API for Queues and Exchanges](#builder-api) and [Annotation-driven Listener Endpoints](#async-annotation-driven) for more information.
+
+##### Deleted Classes
+
+`UniquelyNamedQueue` is no longer provided.
+It is unusual to create a durable non-auto-delete queue with a unique name.
+This class has been deleted.
+If you require its functionality, use `new Queue(UUID.randomUUID().toString())`.
+
+##### New Listener Container
+
+The `DirectMessageListenerContainer` has been added alongside the existing `SimpleMessageListenerContainer`.
+See [Choosing a Container](#choose-container) and [Message Listener Container Configuration](#containerAttributes) for information about choosing which container to use as well as how to configure them.
+
+##### Log4j Appender
+
+This appender is no longer available due to the end-of-life of log4j.
+See [Logging Subsystem AMQP Appenders](#logging) for information about the available log appenders.
+
+##### `RabbitTemplate` Changes
+
+| |Previously, a non-transactional `RabbitTemplate` participated in an existing transaction if it ran on a transactional listener container thread. This was a serious bug. However, users might have relied on this behavior. Starting with version 1.6.2, you must set the `channelTransacted` boolean on the template for it to participate in the container transaction.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The `RabbitTemplate` now uses a `DirectReplyToMessageListenerContainer` (by default) instead of creating a new consumer for each request.
+See [RabbitMQ Direct reply-to](#direct-reply-to) for more information.
+
+The `AsyncRabbitTemplate` now supports direct reply-to.
+See [Async Rabbit Template](#async-template) for more information.
+
+The `RabbitTemplate` and `AsyncRabbitTemplate` now have `receiveAndConvert` and `convertSendAndReceiveAsType` methods that take a `ParameterizedTypeReference` argument, letting the caller specify the type to which to convert the result.
+This is particularly useful for complex types or when type information is not conveyed in message headers.
+It requires a `SmartMessageConverter` such as the `Jackson2JsonMessageConverter`.
+See [Receiving Messages](#receiving-messages), [Request/Reply Messaging](#request-reply), [Async Rabbit Template](#async-template), and [Converting From a `Message` With `RabbitTemplate`](#json-complex) for more information.
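+
+For example, receiving a list of domain objects might look like the following (a sketch; the queue name and `Thing` type are illustrative):
+
+```
+RabbitTemplate template = new RabbitTemplate(connectionFactory);
+template.setMessageConverter(new Jackson2JsonMessageConverter());
+// the ParameterizedTypeReference conveys the full generic type to the converter
+List<Thing> things = template.receiveAndConvert("some.queue",
+        new ParameterizedTypeReference<List<Thing>>() { });
+```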
+
+You can now use a `RabbitTemplate` to perform multiple operations on a dedicated channel.
+See [Scoped Operations](#scoped-operations) for more information.
+
+##### Listener Adapter
+
+A convenient `FunctionalInterface` is available for using lambdas with the `MessageListenerAdapter`.
+See [`MessageListenerAdapter`](#message-listener-adapter) for more information.
+
+##### Listener Container Changes
+
+###### Prefetch Default Value
+
+The prefetch default value used to be 1, which could lead to under-utilization of efficient consumers.
+The default prefetch value is now 250, which should keep consumers busy in most common scenarios and,
+thus, improve throughput.
+
+| |There are scenarios where the prefetch value should be low — for example, with large messages, especially if the processing is slow (messages could add up to a large amount of memory in the client process), and if strict message ordering is necessary (the prefetch value should be set back to 1 in this case). Also, with low-volume messaging and multiple consumers (including concurrency within a single listener container instance), you may wish to reduce the prefetch to get a more even distribution of messages across consumers.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+For more background about prefetch, see this post about [consumer utilization in RabbitMQ](https://www.rabbitmq.com/blog/2014/04/14/finding-bottlenecks-with-rabbitmq-3-3/) and this post about [queuing theory](https://www.rabbitmq.com/blog/2012/05/11/some-queuing-theory-throughput-latency-and-bandwidth/).
+
+###### Message Count
+
+Previously, `MessageProperties.getMessageCount()` returned `0` for messages emitted by the container.
+This property applies only when you use `basicGet` (for example, from `RabbitTemplate.receive()` methods) and is now initialized to `null` for container messages.
+
+###### Transaction Rollback Behavior
+
+Message re-queue on transaction rollback is now consistent, regardless of whether or not a transaction manager is configured.
+See [A note on Rollback of Received Messages](#transaction-rollback) for more information.
+
+###### Shutdown Behavior
+
+If the container threads do not respond to a shutdown within `shutdownTimeout`, the channels are forced closed by default.
+See [Message Listener Container Configuration](#containerAttributes) for more information.
+
+###### After Receive Message Post Processors
+
+If a `MessagePostProcessor` in the `afterReceiveMessagePostProcessors` property returns `null`, the message is discarded (and acknowledged if appropriate).
+
+##### Connection Factory Changes
+
+The connection and channel listener interfaces now provide a mechanism to obtain information about exceptions.
+See [Connection and Channel Listeners](#connection-channel-listeners) and [Publishing is Asynchronous — How to Detect Successes and Failures](#publishing-is-async) for more information.
+
+A new `ConnectionNameStrategy` is now provided to populate the application-specific identification of the target RabbitMQ connection from the `AbstractConnectionFactory`.
+See [Connection and Resource Management](#connections) for more information.
+
+##### Retry Changes
+
+The `MissingMessageIdAdvice` is no longer provided.
+Its functionality is now built-in.
+See [Failures in Synchronous Operations and Options for Retry](#retry) for more information.
+
+##### Anonymous Queue Naming
+
+By default, `AnonymousQueues` are now named with the default `Base64UrlNamingStrategy` instead of a simple `UUID` string.
+See [`AnonymousQueue`](#anonymous-queue) for more information.
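+
+For example, to use the new default strategy with a recognizable prefix (a sketch; the prefix is arbitrary):
+
+```
+Queue anonymous = new AnonymousQueue(new Base64UrlNamingStrategy("myApp-"));
+```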
+
+##### `@RabbitListener` Changes
+
+You can now provide simple queue declarations (bound only to the default exchange) in `@RabbitListener` annotations.
+See [Annotation-driven Listener Endpoints](#async-annotation-driven) for more information.
+
+You can now configure `@RabbitListener` annotations so that any exceptions are returned to the sender.
+You can also configure a `RabbitListenerErrorHandler` to handle exceptions.
+See [Handling Exceptions](#annotation-error-handling) for more information.
+
+You can now bind a queue with multiple routing keys when you use the `@QueueBinding` annotation.
+Also, `@QueueBinding.exchange()` now supports custom exchange types and declares durable exchanges by default.
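+
+A sketch of a binding with two routing keys (the queue, exchange, and key names are illustrative):
+
+```
+@RabbitListener(bindings = @QueueBinding(
+        value = @Queue("someQueue"),
+        exchange = @Exchange(value = "someExchange", type = ExchangeTypes.TOPIC),
+        key = { "red", "blue" }))
+public void listen(String in) {
+    ...
+}
+```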
+
+You can now set the `concurrency` of the listener container at the annotation level rather than having to configure a different container factory for different concurrency settings.
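+
+For example (a sketch; the queue name is illustrative):
+
+```
+@RabbitListener(queues = "someQueue", concurrency = "3-10") // min-max consumers
+public void listen(String in) {
+    ...
+}
+```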
+
+You can now set the `autoStartup` property of the listener container at the annotation level, overriding the default setting in the container factory.
+
+You can now set after receive and before send (reply) `MessagePostProcessor` instances in the `RabbitListener` container factories.
+
+See [Annotation-driven Listener Endpoints](#async-annotation-driven) for more information.
+
+Starting with version 2.0.3, one of the `@RabbitHandler` annotations on a class-level `@RabbitListener` can be designated as the default.
+See [Multi-method Listeners](#annotation-method-selection) for more information.
+
+##### Container Conditional Rollback
+
+When using an external transaction manager (such as JDBC), rule-based rollback is now supported when you provide the container with a transaction attribute.
+It is also now more flexible when you use a transaction advice.
+See [Conditional Rollback](#conditional-rollback) for more information.
+
+##### Remove Jackson 1.x support
+
+Deprecated in previous versions, Jackson `1.x` converters and related components have now been deleted.
+You can use similar components based on Jackson 2.x.
+See [Jackson2JsonMessageConverter](#json-message-converter) for more information.
+
+##### JSON Message Converter
+
+When the `__TypeId__` header is set to `Hashtable` for an inbound JSON message, the default conversion type is now `LinkedHashMap`.
+Previously, it was `Hashtable`.
+To revert to a `Hashtable`, you can use `setDefaultMapType` on the `DefaultClassMapper`.
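+
+A sketch of reverting to the previous behavior, using the `setDefaultMapType` setter mentioned above:
+
+```
+DefaultClassMapper classMapper = new DefaultClassMapper();
+classMapper.setDefaultMapType(Hashtable.class); // pre-2.0 behavior
+Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
+converter.setClassMapper(classMapper);
+```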
+
+##### XML Parsers
+
+When parsing `Queue` and `Exchange` XML components, the parsers no longer register the `name` attribute value as a bean alias if an `id` attribute is present.
+See [A Note On the `id` and `name` Attributes](#note-id-name) for more information.
+
+##### Blocked Connection
+
+You can now inject the `com.rabbitmq.client.BlockedListener` into the `org.springframework.amqp.rabbit.connection.Connection` object.
+Also, the `ConnectionBlockedEvent` and `ConnectionUnblockedEvent` events are emitted by the `ConnectionFactory` when the connection is blocked or unblocked by the Broker.
+
+See [Connection and Resource Management](#connections) for more information.
+
+#### A.2.5. Changes in 1.7 Since 1.6
+
+##### AMQP Client library
+
+Spring AMQP now uses the new 4.0.x version of the `amqp-client` library provided by the RabbitMQ team.
+This client has auto-recovery configured by default.
+See [RabbitMQ Automatic Connection/Topology recovery](#auto-recovery).
+
+| |The 4.0.x client enables automatic recovery by default. While compatible with this feature, Spring AMQP has its own recovery mechanisms, and the client recovery feature generally is not needed. We recommend disabling `amqp-client` automatic recovery, to avoid getting `AutoRecoverConnectionNotCurrentlyOpenException` instances when the broker is available but the connection has not yet recovered. Starting with version 1.7.1, Spring AMQP disables it unless you explicitly create your own RabbitMQ connection factory and provide it to the `CachingConnectionFactory`. RabbitMQ `ConnectionFactory` instances created by the `RabbitConnectionFactoryBean` also have the option disabled by default.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+##### Log4j 2 upgrade
+
+The minimum Log4j 2 version (for the `AmqpAppender`) is now `2.7`.
+The framework is no longer compatible with previous versions.
+See [Logging Subsystem AMQP Appenders](#logging) for more information.
+
+##### Logback Appender
+
+This appender no longer captures caller data (method, line number) by default.
+You can re-enable it by setting the `includeCallerData` configuration option.
+See [Logging Subsystem AMQP Appenders](#logging) for information about the available log appenders.
+
+##### Spring Retry Upgrade
+
+The minimum Spring Retry version is now `1.2`.
+The framework is no longer compatible with previous versions.
+
+###### Shutdown Behavior
+
+You can now set `forceCloseChannel` to `true` so that, if the container threads do not respond to a shutdown within `shutdownTimeout`, the channels are forced closed,
+causing any unacked messages to be re-queued.
+See [Message Listener Container Configuration](#containerAttributes) for more information.
+
+##### FasterXML Jackson upgrade
+
+The minimum Jackson version is now `2.8`.
+The framework is no longer compatible with previous versions.
+
+##### JUnit `@Rules`
+
+Rules that have previously been used internally by the framework have now been made available in a separate jar called `spring-rabbit-junit`.
+See [JUnit4 `@Rules`](#junit-rules) for more information.
+
+##### Container Conditional Rollback
+
+When you use an external transaction manager (such as JDBC), rule-based rollback is now supported when you provide the container with a transaction attribute.
+It is also now more flexible when you use a transaction advice.
+
+##### Connection Naming Strategy
+
+A new `ConnectionNameStrategy` is now provided to populate the application-specific identification of the target RabbitMQ connection from the `AbstractConnectionFactory`.
+See [Connection and Resource Management](#connections) for more information.
+
+##### Listener Container Changes
+
+###### Transaction Rollback Behavior
+
+You can now configure message re-queue on transaction rollback to be consistent, regardless of whether or not a transaction manager is configured.
+See [A note on Rollback of Received Messages](#transaction-rollback) for more information.
+
+#### A.2.6. Earlier Releases
+
+See [Previous Releases](#previous-whats-new) for changes in previous versions.
+
+#### A.2.7. Changes in 1.6 Since 1.5
+
+##### Testing Support
+
+A new testing support library is now provided.
+See [Testing Support](#testing) for more information.
+
+##### Builder
+
+Builders that provide a fluent API for configuring `Queue` and `Exchange` objects are now available.
+See [Builder API for Queues and Exchanges](#builder-api) for more information.
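+
+For example (a sketch; the names and the TTL argument are illustrative):
+
+```
+Queue queue = QueueBuilder.durable("my.queue")
+        .withArgument("x-message-ttl", 60000)
+        .build();
+
+Exchange exchange = ExchangeBuilder.topicExchange("my.exchange")
+        .build();
+```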
+
+##### Namespace Changes
+
+###### Connection Factory
+
+You can now add a `thread-factory` to a connection factory bean declaration — for example, to name the threads
+created by the `amqp-client` library.
+See [Connection and Resource Management](#connections) for more information.
+
+When you use `CacheMode.CONNECTION`, you can now limit the total number of connections allowed.
+See [Connection and Resource Management](#connections) for more information.
+
+###### Queue Definitions
+
+You can now provide a naming strategy for anonymous queues.
+See [`AnonymousQueue`](#anonymous-queue) for more information.
+
+##### Listener Container Changes
+
+###### Idle Message Listener Detection
+
+You can now configure listener containers to publish `ApplicationEvent` instances when idle.
+See [Detecting Idle Asynchronous Consumers](#idle-containers) for more information.
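+
+A sketch of consuming such events with a Spring `@EventListener` method (assuming an `idleEventInterval` has been configured on the container factory):
+
+```
+@EventListener
+public void onIdle(ListenerContainerIdleEvent event) {
+    // invoked when a container has received no messages for the configured interval
+    System.out.println("Container " + event.getListenerId() + " is idle");
+}
+```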
+
+###### Mismatched Queue Detection
+
+By default, when a listener container starts, if queues with mismatched properties or arguments are detected,
+the container logs the exception but continues to listen.
+The container now has a property called `mismatchedQueuesFatal`, which prevents the container (and context) from
+starting if the problem is detected during startup.
+It also stops the container if the problem is detected later, such as after recovering from a connection failure.
+See [Message Listener Container Configuration](#containerAttributes) for more information.
+
+###### Listener Container Logging
+
+The listener container now provides its `beanName` to the internal `SimpleAsyncTaskExecutor` as the `threadNamePrefix`, which is useful for log analysis.
+
+###### Default Error Handler
+
+The default error handler (`ConditionalRejectingErrorHandler`) now considers irrecoverable `@RabbitListener` exceptions as fatal.
+See [Exception Handling](#exception-handling) for more information.
+
+##### `AutoDeclare` and `RabbitAdmin` Instances
+
+See [Message Listener Container Configuration](#containerAttributes) (`autoDeclare`) for some changes to the semantics of that option with respect to the use
+of `RabbitAdmin` instances in the application context.
+
+##### `AmqpTemplate`: Receive with Timeout
+
+A number of new `receive()` methods with `timeout` have been introduced for the `AmqpTemplate` and its `RabbitTemplate` implementation.
+See [Polling Consumer](#polling-consumer) for more information.
+
+##### Using `AsyncRabbitTemplate`
+
+A new `AsyncRabbitTemplate` has been introduced.
+This template provides a number of send and receive methods, where the return value is a `ListenableFuture`, which can
+be used later to obtain the result either synchronously or asynchronously.
+See [Async Rabbit Template](#async-template) for more information.
+
+##### `RabbitTemplate` Changes
+
+1.4.1 introduced the ability to use [direct reply-to](https://www.rabbitmq.com/direct-reply-to.html) when the broker supports it.
+It is more efficient than using a temporary queue for each reply.
+This version lets you override this default behavior and use a temporary queue by setting the `useTemporaryReplyQueues` property to `true`.
+See [RabbitMQ Direct reply-to](#direct-reply-to) for more information.
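+
+A sketch (assuming an existing `connectionFactory`):
+
+```
+RabbitTemplate template = new RabbitTemplate(connectionFactory);
+template.setUseTemporaryReplyQueues(true); // opt out of direct reply-to
+```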
+
+The `RabbitTemplate` now supports a `user-id-expression` (`userIdExpression` when using Java configuration).
+See [Validated User-ID RabbitMQ documentation](https://www.rabbitmq.com/validated-user-id.html) and [Validated User Id](#template-user-id) for more information.
+
+##### Message Properties
+
+###### Using `CorrelationId`
+
+The `correlationId` message property can now be a `String`.
+See [Message Properties Converters](#message-properties-converters) for more information.
+
+###### Long String Headers
+
+Previously, the `DefaultMessagePropertiesConverter` “converted” headers longer than the long string limit (default 1024)
+to a `DataInputStream` (actually, it referenced the `LongString` instance’s `DataInputStream`).
+On output, this header was not converted (except to a `String`, by calling `toString()` on the stream).
+
+With this release, long `LongString` instances are now left as `LongString` instances by default.
+You can access the contents by using the `getBytes()`, `toString()`, or `getStream()` methods.
+A large incoming `LongString` is now correctly “converted” on output too.
+
+See [Message Properties Converters](#message-properties-converters) for more information.
+
+###### Inbound Delivery Mode
+
+The `deliveryMode` property is no longer mapped to the `MessageProperties.deliveryMode`.
+This change avoids unintended propagation if the same `MessageProperties` object is used to send an outbound message.
+Instead, the inbound `deliveryMode` header is mapped to `MessageProperties.receivedDeliveryMode`.
+
+See [Message Properties Converters](#message-properties-converters) for more information.
+
+When using annotated endpoints, the header is provided in the header named `AmqpHeaders.RECEIVED_DELIVERY_MODE`.
+
+See [Annotated Endpoint Method Signature](#async-annotation-driven-enable-signature) for more information.
+
+###### Inbound User ID
+
+The `user_id` property is no longer mapped to the `MessageProperties.userId`.
+This change avoids unintended propagation if the same `MessageProperties` object is used to send an outbound message.
+Instead, the inbound `userId` header is mapped to `MessageProperties.receivedUserId`.
+
+See [Message Properties Converters](#message-properties-converters) for more information.
+
+When you use annotated endpoints, the header is provided in the header named `AmqpHeaders.RECEIVED_USER_ID`.
+
+See [Annotated Endpoint Method Signature](#async-annotation-driven-enable-signature) for more information.
+
+##### `RabbitAdmin` Changes
+
+###### Declaration Failures
+
+Previously, the `ignoreDeclarationFailures` flag took effect only for `IOException` on the channel (such as mis-matched
+arguments).
+It now takes effect for any exception (such as `TimeoutException`).
+In addition, a `DeclarationExceptionEvent` is now published whenever a declaration fails.
+The last declaration event is also available on the `RabbitAdmin` as the `lastDeclarationExceptionEvent` property.
+See [Configuring the Broker](#broker-configuration) for more information.
+
+##### `@RabbitListener` Changes
+
+###### Multiple Containers for Each Bean
+
+When you use Java 8 or later, you can now add multiple `@RabbitListener` annotations to `@Bean` classes or
+their methods.
+When using Java 7 or earlier, you can use the `@RabbitListeners` container annotation to provide the same
+functionality.
+See [`@Repeatable` `@RabbitListener`](#repeatable-rabbit-listener) for more information.
+
+###### `@SendTo` SpEL Expressions
+
+`@SendTo` for routing replies with no `replyTo` property can now be SpEL expressions evaluated against the
+request/reply.
+See [Reply Management](#async-annotation-driven-reply) for more information.
+
+###### `@QueueBinding` Improvements
+
+You can now specify arguments for queues, exchanges, and bindings in `@QueueBinding` annotations.
+Header exchanges are now supported by `@QueueBinding`.
+See [Annotation-driven Listener Endpoints](#async-annotation-driven) for more information.
+
+##### Delayed Message Exchange
+
+Spring AMQP now has first-class support for the RabbitMQ Delayed Message Exchange plugin.
+See [Delayed Message Exchange](#delayed-message-exchange) for more information.
+
+##### Exchange Internal Flag
+
+Any `Exchange` definitions can now be marked as `internal`, and `RabbitAdmin` passes the value to the broker when
+declaring the exchange.
+See [Configuring the Broker](#broker-configuration) for more information.
+
+##### `CachingConnectionFactory` Changes
+
+###### `CachingConnectionFactory` Cache Statistics
+
+The `CachingConnectionFactory` now provides cache properties at runtime and over JMX.
+See [Runtime Cache Properties](#runtime-cache-properties) for more information.
+
+###### Accessing the Underlying RabbitMQ Connection Factory
+
+A new getter has been added to provide access to the underlying factory.
+You can use this getter, for example, to add custom connection properties.
+See [Adding Custom Client Connection Properties](#custom-client-props) for more information.
+
+###### Channel Cache
+
+The default channel cache size has been increased from 1 to 25.
+See [Connection and Resource Management](#connections) for more information.
+
+In addition, the `SimpleMessageListenerContainer` no longer adjusts the cache size to be at least as large as the number
+of `concurrentConsumers` — this was superfluous, since the container consumer channels are never cached.
+
+##### Using `RabbitConnectionFactoryBean`
+
+The factory bean now exposes a property to add client connection properties to connections made by the resulting
+factory.
+
+##### Java Deserialization
+
+You can now configure an “allowed list” of allowable classes when you use Java deserialization.
+You should consider creating an allowed list if you accept messages with serialized Java objects from
+untrusted sources.
+See [Java Deserialization](#java-deserialization) for more information.
+
+##### JSON `MessageConverter`
+
+Improvements to the JSON message converter now allow the consumption of messages that do not have type information
+in message headers.
+See [Message Conversion for Annotated Methods](#async-annotation-conversion) and [Jackson2JsonMessageConverter](#json-message-converter) for more information.
+
+##### Logging Appenders
+
+###### Log4j 2
+
+A log4j 2 appender has been added, and the appenders can now be configured with an `addresses` property to connect
+to a broker cluster.
+
+###### Client Connection Properties
+
+You can now add custom client connection properties to RabbitMQ connections.
+
+See [Logging Subsystem AMQP Appenders](#logging) for more information.
+
+#### A.2.8. Changes in 1.5 Since 1.4
+
+##### `spring-erlang` Is No Longer Supported
+
+The `spring-erlang` jar is no longer included in the distribution.
+Use [the RabbitMQ REST API](#management-rest-api) instead.
+
+##### `CachingConnectionFactory` Changes
+
+###### Empty Addresses Property in `CachingConnectionFactory`
+
+Previously, if the connection factory was configured with a host and port but an empty String was also supplied for `addresses`, the host and port were ignored.
+Now, an empty `addresses` String is treated the same as a `null`, and the host and port are used.
+
+###### URI Constructor
+
+The `CachingConnectionFactory` has an additional constructor, with a `URI` parameter, to configure the broker connection.
+
+###### Connection Reset
+
+A new method called `resetConnection()` has been added to let users reset the connection (or connections).
+You might use this, for example, to reconnect to the primary broker after failing over to the secondary broker.
+This **does** impact in-process operations.
+The existing `destroy()` method does exactly the same, but the new method has a less daunting name.
+
+##### Properties to Control Container Queue Declaration Behavior
+
+When the listener container consumers start, they attempt to passively declare the queues to ensure they are available
+on the broker.
+Previously, if these declarations failed (for example, because the queues didn’t exist) or when an HA queue was being
+moved, the retry logic was fixed at three retry attempts at five-second intervals.
+If the queues still do not exist, the behavior is controlled by the `missingQueuesFatal` property (default: `true`).
+Also, for containers configured to listen from multiple queues, if only a subset of queues are available, the consumer
+retried the missing queues on a fixed interval of 60 seconds.
+
+The `declarationRetries`, `failedDeclarationRetryInterval`, and `retryDeclarationInterval` properties are now configurable.
+See [Message Listener Container Configuration](#containerAttributes) for more information.
+
+##### Class Package Change
+
+The `RabbitGatewaySupport` class has been moved from `o.s.amqp.rabbit.core.support` to `o.s.amqp.rabbit.core`.
+
+##### `DefaultMessagePropertiesConverter` Changes
+
+You can now configure the `DefaultMessagePropertiesConverter` to
+determine the maximum length of a `LongString` that is converted
+to a `String` rather than to a `DataInputStream`.
+The converter has an alternative constructor that takes the value as a limit.
+Previously, this limit was hard-coded at `1024` bytes.
+(Also available in 1.4.4).
+
+##### `@RabbitListener` Improvements
+
+###### `@QueueBinding` for `@RabbitListener`
+
+The `bindings` attribute has been added to the `@RabbitListener` annotation as mutually exclusive with the `queues` attribute to allow the specification of the `queue`, its `exchange`, and `binding` for declaration by a `RabbitAdmin` on
+the Broker.
+
+###### SpEL in `@SendTo`
+
+The default reply address (`@SendTo`) for a `@RabbitListener` can now be a SpEL expression.
+
+###### Multiple Queue Names through Properties
+
+You can now use a combination of SpEL and property placeholders to specify multiple queues for a listener.
+
+See [Annotation-driven Listener Endpoints](#async-annotation-driven) for more information.
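+
+For example, assuming a hypothetical property such as `my.queues=q1,q2,q3`:
+
+```
+@RabbitListener(queues = "#{'${my.queues}'.split(',')}")
+public void listen(String in) {
+    ...
+}
+```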
+
+##### Automatic Exchange, Queue, and Binding Declaration
+
+You can now declare beans that define a collection of these entities, and the `RabbitAdmin` adds the
+contents to the list of entities that it declares when a connection is established.
+See [Declaring Collections of Exchanges, Queues, and Bindings](#collection-declaration) for more information.
+
+##### `RabbitTemplate` Changes
+
+###### `reply-address` Added
+
+The `reply-address` attribute has been added to the `<rabbit:template>` component as an alternative to `reply-queue`.
+See [Request/Reply Messaging](#request-reply) for more information.
+(Also available in 1.4.4 as a setter on the `RabbitTemplate`).
+
+###### Blocking `receive` Methods
+
+The `RabbitTemplate` now supports blocking in `receive` and `convertAndReceive` methods.
+See [Polling Consumer](#polling-consumer) for more information.
+
+###### Mandatory with `sendAndReceive` Methods
+
+When the `mandatory` flag is set when using the `sendAndReceive` and `convertSendAndReceive` methods, the calling thread
+throws an `AmqpMessageReturnedException` if the request message cannot be delivered.
+See [Reply Timeout](#reply-timeout) for more information.
+
+###### Improper Reply Listener Configuration
+
+The framework tries to verify proper configuration of a reply listener container when using a named reply queue.
+
+See [Reply Listener Container](#reply-listener) for more information.
+
+##### `RabbitManagementTemplate` Added
+
+The `RabbitManagementTemplate` has been introduced to monitor and configure the RabbitMQ Broker by using the REST API provided by its [management plugin](https://www.rabbitmq.com/management.html).
+See [RabbitMQ REST API](#management-rest-api) for more information.
+
+##### `listener-container` Changes
+
+| |The `id` attribute on the `<listener-container/>` element has been removed. Starting with this release, the `id` on the `<listener/>` child element is used alone to name the listener container bean created for each listener element. Normal Spring bean name overrides are applied. If a later `<listener/>` is parsed with the same `id` as an existing bean, the new definition overrides the existing one. Previously, bean names were composed from the `id` attributes of the `<listener-container/>` and `<listener/>` elements. When migrating to this release, if you have `id` attributes on your `<listener-container/>` elements, remove them and set the `id` on the child `<listener/>` element instead.|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+However, to support starting and stopping containers as a group, a new `group` attribute has been added.
+When this attribute is defined, the containers created by this element are added to a bean with this name, of type `Collection`.
+You can iterate over this group to start and stop containers.
+
+##### Class-Level `@RabbitListener`
+
+The `@RabbitListener` annotation can now be applied at the class level.
+Together with the new `@RabbitHandler` method annotation, this lets you select the handler method based on payload type.
+See [Multi-method Listeners](#annotation-method-selection) for more information.
+
+##### `SimpleMessageListenerContainer`: BackOff Support
+
+The `SimpleMessageListenerContainer` can now be supplied with a `BackOff` instance for `consumer` startup recovery.
+See [Message Listener Container Configuration](#containerAttributes) for more information.
+
+##### Channel Close Logging
+
+A mechanism to control the log levels of channel closure has been introduced.
+See [Logging Channel Close Events](#channel-close-logging).
+
+##### Application Events
+
+The `SimpleMessageListenerContainer` now emits application events when consumers fail.
+See [Consumer Events](#consumer-events) for more information.
+
+##### Consumer Tag Configuration
+
+Previously, the consumer tags for asynchronous consumers were generated by the broker.
+With this release, it is now possible to supply a naming strategy to the listener container.
+See [Consumer Tags](#consumerTags).
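+
+A sketch, assuming a `SimpleMessageListenerContainer` named `container` (the tag prefix is arbitrary):
+
+```
+container.setConsumerTagStrategy(queue -> "myApp.consumer." + queue);
+```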
+
+##### Using `MessageListenerAdapter`
+
+The `MessageListenerAdapter` now supports a map of queue names (or consumer tags) to method names, to determine
+which delegate method to call based on the queue from which the message was received.
+
+##### `LocalizedQueueConnectionFactory` Added
+
+`LocalizedQueueConnectionFactory` is a new connection factory that connects to the node in a cluster where a mirrored queue actually resides.
+
+See [Queue Affinity and the `LocalizedQueueConnectionFactory`](#queue-affinity).
+
+##### Anonymous Queue Naming
+
+Starting with version 1.5.3, you can now control how `AnonymousQueue` names are generated.
+See [`AnonymousQueue`](#anonymous-queue) for more information.
+
+#### A.2.9. Changes in 1.4 Since 1.3
+
+##### `@RabbitListener` Annotation
+
+POJO listeners can be annotated with `@RabbitListener`, enabled by `@EnableRabbit` or `<rabbit:annotation-driven/>`.
+Spring Framework 4.1 is required for this feature.
+See [Annotation-driven Listener Endpoints](#async-annotation-driven) for more information.
+
+##### `RabbitMessagingTemplate` Added
+
+A new `RabbitMessagingTemplate` lets you interact with RabbitMQ by using `spring-messaging` `Message` instances.
+Internally, it uses the `RabbitTemplate`, which you can configure as normal.
+Spring Framework 4.1 is required for this feature.
+See [Messaging Integration](#template-messaging) for more information.
+
+##### Listener Container `missingQueuesFatal` Attribute
+
+1.3.5 introduced the `missingQueuesFatal` property on the `SimpleMessageListenerContainer`.
+This is now available on the listener container namespace element.
+See [Message Listener Container Configuration](#containerAttributes).
+
+##### RabbitTemplate `ConfirmCallback` Interface
+
+The `confirm` method on this interface has an additional parameter called `cause`.
+When available, this parameter contains the reason for a negative acknowledgement (nack).
+See [Correlated Publisher Confirms and Returns](#template-confirms).
+
+##### `RabbitConnectionFactoryBean` Added
+
+`RabbitConnectionFactoryBean` creates the underlying RabbitMQ `ConnectionFactory` used by the `CachingConnectionFactory`.
+This enables configuration of SSL options using Spring’s dependency injection.
+See [Configuring the Underlying Client Connection Factory](#connection-factory).
+
+##### Using `CachingConnectionFactory`
+
+The `CachingConnectionFactory` now lets the `connectionTimeout` be set as a property or as an attribute in the namespace.
+It sets the property on the underlying RabbitMQ `ConnectionFactory`.
+See [Configuring the Underlying Client Connection Factory](#connection-factory).
+
+##### Log Appender
+
+The Logback `org.springframework.amqp.rabbit.logback.AmqpAppender` has been introduced.
+It provides options similar to `org.springframework.amqp.rabbit.log4j.AmqpAppender`.
+For more information, see the JavaDoc of these classes.
+
+The Log4j `AmqpAppender` now supports the `deliveryMode` property (`PERSISTENT` or `NON_PERSISTENT`, default: `PERSISTENT`).
+Previously, all log4j messages were `PERSISTENT`.
+
+The appender also supports modification of the `Message` before sending — allowing, for example, the addition of custom headers.
+Subclasses should override the `postProcessMessageBeforeSend()` method.
+
+##### Listener Queues
+
+The listener container now, by default, redeclares any missing queues during startup.
+A new `auto-declare` attribute has been added to the `<listener-container/>` to prevent these re-declarations.
+See [`auto-delete` Queues](#lc-auto-delete).
+
+##### `RabbitTemplate`: `mandatory` and `connectionFactorySelector` Expressions
+
+The `mandatoryExpression`, `sendConnectionFactorySelectorExpression`, and `receiveConnectionFactorySelectorExpression` SpEL `Expression` properties have been added to `RabbitTemplate`.
+The `mandatoryExpression` is used to evaluate a `mandatory` boolean value against each request message when a `ReturnCallback` is in use.
+See [Correlated Publisher Confirms and Returns](#template-confirms).
+The `sendConnectionFactorySelectorExpression` and `receiveConnectionFactorySelectorExpression` are used when an `AbstractRoutingConnectionFactory` is provided, to determine the `lookupKey` for the target `ConnectionFactory` at runtime on each AMQP protocol interaction operation.
+See [Routing Connection Factory](#routing-connection-factory).
+
+##### Listeners and the Routing Connection Factory
+
+You can configure a `SimpleMessageListenerContainer` with a routing connection factory to enable connection selection based on the queue names.
+See [Routing Connection Factory](#routing-connection-factory).
+
+##### `RabbitTemplate`: `RecoveryCallback` Option
+
+The `recoveryCallback` property has been added for use in the `retryTemplate.execute()`.
+See [Adding Retry Capabilities](#template-retry).
+
+##### `MessageConversionException` Change
+
+This exception is now a subclass of `AmqpException`.
+Consider the following code:
+
+```
+try {
+ template.convertAndSend("thing1", "thing2", "cat");
+}
+catch (AmqpException e) {
+ ...
+}
+catch (MessageConversionException e) {
+ ...
+}
+```
+
+The second catch block is no longer reachable and needs to be moved above the catch-all `AmqpException` catch block.
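+
+To illustrate why the order matters, here is a self-contained sketch using stand-in classes that mirror the hierarchy (for illustration only; the real classes live in Spring AMQP):
+
+```java
+// Stand-ins for the Spring AMQP exception hierarchy (illustration only)
+class AmqpException extends RuntimeException {
+    AmqpException(String msg) { super(msg); }
+}
+
+class MessageConversionException extends AmqpException {
+    MessageConversionException(String msg) { super(msg); }
+}
+
+public class CatchOrderDemo {
+
+    public static String handle() {
+        try {
+            throw new MessageConversionException("bad payload");
+        }
+        catch (MessageConversionException e) { // subclass first, so this block is reachable
+            return "conversion";
+        }
+        catch (AmqpException e) { // catch-all for other AMQP exceptions
+            return "amqp";
+        }
+    }
+
+    public static void main(String[] args) {
+        System.out.println(handle()); // prints "conversion"
+    }
+}
+```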
+
+##### RabbitMQ 3.4 Compatibility
+
+Spring AMQP is now compatible with RabbitMQ 3.4, including direct reply-to.
+See [Compatibility](#compatibility) and [RabbitMQ Direct reply-to](#direct-reply-to) for more information.
+
+##### `ContentTypeDelegatingMessageConverter` Added
+
+The `ContentTypeDelegatingMessageConverter` has been introduced to select the `MessageConverter` to use, based on the `contentType` property in the `MessageProperties`.
+See [Message Converters](#message-converters) for more information.
+
+#### A.2.10. Changes in 1.3 Since 1.2
+
+##### Listener Concurrency
+
+The listener container now supports dynamic scaling of the number of consumers based on workload, or you can programmatically change the concurrency without stopping the container.
+See [Listener Concurrency](#listener-concurrency).
+
+##### Listener Queues
+
+The listener container now permits the queues on which it listens to be modified at runtime.
+Also, the container now starts if at least one of its configured queues is available for use.
+See [Listener Container Queues](#listener-queues).
+
+This listener container now redeclares any auto-delete queues during startup.
+See [`auto-delete` Queues](#lc-auto-delete).
+
+##### Consumer Priority
+
+The listener container now supports consumer arguments, letting the `x-priority` argument be set.
+See [Consumer Priority](#consumer-priority).
+
+##### Exclusive Consumer
+
+You can now configure `SimpleMessageListenerContainer` with a single `exclusive` consumer, preventing other consumers from listening to the queue.
+See [Exclusive Consumer](#exclusive-consumer).
+
+##### Rabbit Admin
+
+You can now have the broker generate the queue name, regardless of `durable`, `autoDelete`, and `exclusive` settings.
+See [Configuring the Broker](#broker-configuration).
+
+##### Direct Exchange Binding
+
+Previously, omitting the `key` attribute from a `binding` element of a `direct-exchange` configuration caused the queue or exchange to be bound with an empty string as the routing key.
+Now it is bound with the name of the provided `Queue` or `Exchange`.
+If you wish to bind with an empty string routing key, you need to specify `key=""`.
+
+##### `AmqpTemplate` Changes
+
+The `AmqpTemplate` now provides several synchronous `receiveAndReply` methods.
+These are implemented by the `RabbitTemplate`.
+For more information see [Receiving Messages](#receiving-messages).
+
+The `RabbitTemplate` now supports configuring a `RetryTemplate` to attempt retries (with optional back-off policy) for when the broker is not available.
+For more information see [Adding Retry Capabilities](#template-retry).
+
+##### Caching Connection Factory
+
+You can now configure the caching connection factory to cache `Connection` instances and their `Channel` instances instead of using a single connection and caching only `Channel` instances.
+See [Connection and Resource Management](#connections).
+
+##### Binding Arguments
+
+The `<binding>` of the `<exchange>` now supports parsing of the `<binding-arguments>` sub-element.
+You can now configure the `<binding>` of the `<headers-exchange>` with a `key/value` attribute pair (to match on a single header) or with a `<binding-arguments>` sub-element (allowing matching on multiple headers).
+These options are mutually exclusive.
+See [Headers Exchange](#headers-exchange).
+
+##### Routing Connection Factory
+
+A new `SimpleRoutingConnectionFactory` has been introduced.
+It allows configuration of `ConnectionFactories` mapping, to determine the target `ConnectionFactory` to use at runtime.
+See [Routing Connection Factory](#routing-connection-factory).
+
+##### `MessageBuilder` and `MessagePropertiesBuilder`
+
+“Fluent APIs” for building messages or message properties are now provided.
+See [Message Builder API](#message-builder).
+
+##### `RetryInterceptorBuilder` Change
+
+A “Fluent API” for building listener container retry interceptors is now provided.
+See [Failures in Synchronous Operations and Options for Retry](#retry).
+
+##### `RepublishMessageRecoverer` Added
+
+This new `MessageRecoverer` is provided to allow publishing a failed message to another queue (including stack trace information in the header) when retries are exhausted.
+See [Message Listeners and the Asynchronous Case](#async-listeners).
+
+##### Default Error Handler (Since 1.3.2)
+
+A default `ConditionalRejectingErrorHandler` has been added to the listener container.
+This error handler detects fatal message conversion problems and instructs the container to reject the message to prevent the broker from continually redelivering the unconvertible message.
+See [Exception Handling](#exception-handling).
+
+##### Listener Container `missingQueuesFatal` Property (Since 1.3.5)
+
+The `SimpleMessageListenerContainer` now has a property called `missingQueuesFatal` (default: `true`).
+Previously, missing queues were always fatal.
+See [Message Listener Container Configuration](#containerAttributes).
+
+#### A.2.11. Changes to 1.2 Since 1.1
+
+##### RabbitMQ Version
+
+Spring AMQP now uses RabbitMQ 3.1.x by default (but retains compatibility with earlier versions).
+Certain deprecations have been added for features no longer supported by RabbitMQ 3.1.x — federated exchanges and the `immediate` property on the `RabbitTemplate`.
+
+##### Rabbit Admin
+
+`RabbitAdmin` now provides an option to let exchange, queue, and binding declarations continue when a declaration fails.
+Previously, all declarations stopped on a failure.
+By setting `ignore-declaration-exceptions`, such exceptions are logged (at the `WARN` level), but further declarations continue.
+An example where this might be useful is when a queue declaration fails because of a slightly different `ttl` setting that would normally stop other declarations from proceeding.
+
+`RabbitAdmin` now provides an additional method called `getQueueProperties()`.
+You can use this to determine whether a queue exists on the broker (it returns `null` for a non-existent queue).
+In addition, it returns the current number of messages in the queue as well as the current number of consumers.
+
+##### Rabbit Template
+
+Previously, when the `…sendAndReceive()` methods were used with a fixed reply queue, two custom headers were used for correlation data and to retain and restore reply queue information.
+With this release, the standard message property (`correlationId`) is used by default, although you can specify a custom property to use instead.
+In addition, nested `replyTo` information is now retained internally in the template, instead of using a custom header.
+
+The `immediate` property is deprecated.
+You must not set this property when using RabbitMQ 3.0.x or greater.
+
+##### JSON Message Converters
+
+A Jackson 2.x `MessageConverter` is now provided, along with the existing converter that uses Jackson 1.x.
+
+##### Automatic Declaration of Queues and Other Items
+
+Previously, when declaring queues, exchanges and bindings, you could not define which connection factory was used for the declarations.
+Each `RabbitAdmin` declared all components by using its connection.
+
+Starting with this release, you can now limit declarations to specific `RabbitAdmin` instances.
+See [Conditional Declaration](#conditional-declaration).
+
+##### AMQP Remoting
+
+Facilities are now provided for using Spring remoting techniques, using AMQP as the transport for the RPC calls.
+For more information, see [Spring Remoting with AMQP](#remoting).
+
+##### Requested Heart Beats
+
+Several users have asked for the underlying client connection factory’s `requestedHeartBeats` property to be exposed on the Spring AMQP `CachingConnectionFactory`.
+This is now available.
+Previously, it was necessary to configure the AMQP client factory as a separate bean and provide a reference to it in the `CachingConnectionFactory`.
+
+#### A.2.12. Changes to 1.1 Since 1.0
+
+##### General
+
+Spring-AMQP is now built with Gradle.
+
+Adds support for publisher confirms and returns.
+
+Adds support for HA queues and broker failover.
+
+Adds support for dead letter exchanges and dead letter queues.
+
+##### AMQP Log4j Appender
+
+Adds an option to support adding a message ID to logged messages.
+
+Adds an option to allow the specification of a `Charset` name to be used when converting `String` to `byte[]`.
\ No newline at end of file
diff --git a/docs/en/spring-for-graphql/READEME.md b/docs/en/spring-batch/README.md
similarity index 100%
rename from docs/en/spring-for-graphql/READEME.md
rename to docs/en/spring-batch/README.md
diff --git a/docs/en/spring-batch/appendix.md b/docs/en/spring-batch/appendix.md
new file mode 100644
index 0000000000000000000000000000000000000000..10e84b5cc95996eff85bcf539d44c70e52435319
--- /dev/null
+++ b/docs/en/spring-batch/appendix.md
@@ -0,0 +1,48 @@
+## Appendix A: List of ItemReaders and ItemWriters
+
+### Item Readers
+
+| Item Reader | Description |
+|----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+|AbstractItemCountingItemStreamItemReader| Abstract base class that provides basic restart capabilities by counting the number of items returned from an `ItemReader`. |
+| AggregateItemReader |An `ItemReader` that delivers a list as its item, storing up objects from the injected `ItemReader` until they are ready to be packed out as a collection. This class must be used as a wrapper for a custom `ItemReader` that can identify the record boundaries. The custom reader should mark the beginning and end of records by returning an `AggregateItem` which responds `true` to its query methods `isHeader()` and `isFooter()`. Note that this reader is not part of the library of readers provided by Spring Batch but given as a sample in `spring-batch-samples`.|
+| AmqpItemReader | Given a Spring `AmqpTemplate`, it provides synchronous receive methods. The `receiveAndConvert()` method lets you receive POJO objects. |
+| KafkaItemReader | An `ItemReader` that reads messages from an Apache Kafka topic. It can be configured to read messages from multiple partitions of the same topic. This reader stores message offsets in the execution context to support restart capabilities. |
+| FlatFileItemReader | Reads from a flat file. Includes `ItemStream` and `Skippable` functionality. See [`FlatFileItemReader`](readersAndWriters.html#flatFileItemReader). |
+| HibernateCursorItemReader | Reads from a cursor based on an HQL query. See [`Cursor-based ItemReaders`](readersAndWriters.html#cursorBasedItemReaders). |
+| HibernatePagingItemReader | Reads from a paginated HQL query. |
+| ItemReaderAdapter | Adapts any class to the `ItemReader` interface. |
+| JdbcCursorItemReader | Reads from a database cursor via JDBC. See [`Cursor-based ItemReaders`](readersAndWriters.html#cursorBasedItemReaders). |
+| JdbcPagingItemReader | Given an SQL statement, pages through the rows, such that large datasets can be read without running out of memory. |
+| JmsItemReader | Given a Spring `JmsOperations` object and a JMS Destination or destination name to which to send errors, provides items received through the injected `JmsOperations#receive()` method. |
+| JpaPagingItemReader | Given a JPQL statement, pages through the rows, such that large datasets can be read without running out of memory. |
+| ListItemReader | Provides the items from a list, one at a time. |
+| MongoItemReader | Given a `MongoOperations` object and a JSON-based MongoDB query, provides items received from the `MongoOperations#find()` method. |
+| Neo4jItemReader | Given a `Neo4jOperations` object and the components of a Cypher query, items are returned as the result of the `Neo4jOperations.query` method. |
+| RepositoryItemReader | Given a Spring Data `PagingAndSortingRepository` object, a `Sort`, and the name of the method to execute, returns items provided by the Spring Data repository implementation. |
+| StoredProcedureItemReader | Reads from a database cursor resulting from the execution of a database stored procedure. See [`StoredProcedureItemReader`](readersAndWriters.html#StoredProcedureItemReader). |
+| StaxEventItemReader | Reads via StAX. See [`StaxEventItemReader`](readersAndWriters.html#StaxEventItemReader). |
+| JsonItemReader | Reads items from a JSON document. See [`JsonItemReader`](readersAndWriters.html#JsonItemReader). |
+
+### Item Writers
+
+| Item Writer | Description |
+|--------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| AbstractItemStreamItemWriter | Abstract base class that combines the `ItemStream` and `ItemWriter` interfaces. |
+| AmqpItemWriter | Given a Spring `AmqpTemplate`, it provides for a synchronous `send` method. The `convertAndSend(Object)` method lets you send POJO objects. |
+| CompositeItemWriter | Passes an item to the `write` method of each in an injected `List` of `ItemWriter` objects. |
+| FlatFileItemWriter | Writes to a flat file. Includes `ItemStream` and `Skippable` functionality. See [`FlatFileItemWriter`](readersAndWriters.html#flatFileItemWriter). |
+| GemfireItemWriter | Using a `GemfireOperations` object, items are either written or removed from the Gemfire instance, based on the configuration of the delete flag. |
+| HibernateItemWriter | This item writer is Hibernate-session aware and handles some transaction-related work that a non-"hibernate-aware" item writer would not need to know about and then delegates to another item writer to do the actual writing. |
+| ItemWriterAdapter | Adapts any class to the `ItemWriter` interface. |
+| JdbcBatchItemWriter | Uses batching features from a `PreparedStatement`, if available, and can take rudimentary steps to locate a failure during a `flush`. |
+| JmsItemWriter | Using a `JmsOperations` object, items are written to the default queue through the `JmsOperations#convertAndSend()` method. |
+| JpaItemWriter | This item writer is JPA EntityManager-aware and handles some transaction-related work that a non-"JPA-aware" `ItemWriter` would not need to know about and then delegates to another writer to do the actual writing. |
+| KafkaItemWriter | Using a `KafkaTemplate` object, items are written to the default topic through the `KafkaTemplate#sendDefault(Object, Object)` method, using a `Converter` to map the key from the item. A delete flag can also be configured to send delete events to the topic. |
+| MimeMessageItemWriter | Using Spring's `JavaMailSender`, items of type `MimeMessage` are sent as mail messages. |
+| MongoItemWriter | Given a `MongoOperations` object, items are written through the `MongoOperations.save(Object)` method. The actual write is delayed until the last possible moment before the transaction commits. |
+| Neo4jItemWriter | Given a `Neo4jOperations` object, items are persisted through the `save(Object)` method or deleted through the `delete(Object)` method, per the `ItemWriter`'s configuration. |
+| PropertyExtractingDelegatingItemWriter | Extends `AbstractMethodInvokingDelegator`, creating arguments on the fly. Arguments are created by retrieving the values from the fields in the item to be processed (through a `SpringBeanWrapper`), based on an injected array of field names. |
+| RepositoryItemWriter | Given a Spring Data `CrudRepository` implementation, items are saved through the method specified in the configuration. |
+| StaxEventItemWriter | Uses a `Marshaller` implementation to convert each item to XML and then writes it to an XML file using StAX. |
+| JsonFileItemWriter | Uses a `JsonObjectMarshaller` implementation to convert each item to JSON and then writes it to a JSON file. |
\ No newline at end of file
diff --git a/docs/en/spring-batch/common-patterns.md b/docs/en/spring-batch/common-patterns.md
new file mode 100644
index 0000000000000000000000000000000000000000..12e0f17d23d4ff2b75bf2c0f548da8879c09bee7
--- /dev/null
+++ b/docs/en/spring-batch/common-patterns.md
@@ -0,0 +1,703 @@
+# Common Batch Patterns
+
+## Common Batch Patterns
+
+Some batch jobs can be assembled purely from off-the-shelf components in Spring Batch.
+For instance, the `ItemReader` and `ItemWriter` implementations can be configured to
+cover a wide range of scenarios. However, for the majority of cases, custom code must be
+written. The main API entry points for application developers are the `Tasklet`, the `ItemReader`, the `ItemWriter`, and the various listener interfaces. Most simple batch
+jobs can use off-the-shelf input from a Spring Batch `ItemReader`, but it is often the
+case that there are custom concerns in the processing and writing that require developers
+to implement an `ItemWriter` or `ItemProcessor`.
+
+In this chapter, we provide a few examples of common patterns in custom business logic.
+These examples primarily feature the listener interfaces. It should be noted that an `ItemReader` or `ItemWriter` can implement a listener interface as well, if appropriate.
+
+### Logging Item Processing and Failures
+
+A common use case is the need for special handling of errors in a step, item by item,
+perhaps logging to a special channel or inserting a record into a database. A
+chunk-oriented `Step` (created from the step factory beans) lets users implement this use
+case with a simple `ItemReadListener` for errors on `read` and an `ItemWriteListener` for
+errors on `write`. The following code snippet illustrates a listener that logs both read
+and write failures:
+
+```
+public class ItemFailureLoggerListener extends ItemListenerSupport {
+
+    private static Log logger = LogFactory.getLog("item.error");
+
+    public void onReadError(Exception ex) {
+        logger.error("Encountered error on read", ex);
+    }
+
+    public void onWriteError(Exception ex, List<? extends Object> items) {
+        logger.error("Encountered error on write", ex);
+    }
+}
+```
+
+Having implemented this listener, it must be registered with a step.
+
+The following example shows how to register a listener with a step in XML:
+
+XML Configuration
+
+```
+<step id="simpleStep">
+...
+<listeners>
+    <listener>
+        <bean class="org.example...ItemFailureLoggerListener"/>
+    </listener>
+</listeners>
+</step>
+```
+
+The following example shows how to register a listener with a step in Java:
+
+Java Configuration
+
+```
+@Bean
+public Step simpleStep() {
+    return this.stepBuilderFactory.get("simpleStep")
+            ...
+            .listener(new ItemFailureLoggerListener())
+            .build();
+}
+```
+
+| |If your listener does anything in an `onError()` method, it must be inside a transaction that is going to be rolled back. If you need to use a transactional resource, such as a database, inside an `onError()` method, consider adding a declarative transaction to that method (see Spring Core Reference Guide for details), and giving its propagation attribute a value of `REQUIRES_NEW`.|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+### Stopping a Job Manually for Business Reasons
+
+Spring Batch provides a `stop()` method through the `JobOperator` interface, but this is
+really for use by the operator rather than the application programmer. Sometimes, it is
+more convenient or makes more sense to stop a job execution from within the business
+logic.
+
+The simplest thing to do is to throw a `RuntimeException` (one that is neither retried
+indefinitely nor skipped). For example, a custom exception type could be used, as shown
+in the following example:
+
+```
+public class PoisonPillItemProcessor<T> implements ItemProcessor<T, T> {
+
+    @Override
+    public T process(T item) throws Exception {
+        if (isPoisonPill(item)) {
+            throw new PoisonPillException("Poison pill detected: " + item);
+        }
+        return item;
+    }
+}
+```
+
+Another simple way to stop a step from executing is to return `null` from the `ItemReader`, as shown in the following example:
+
+```
+public class EarlyCompletionItemReader<T> implements ItemReader<T> {
+
+    private ItemReader<T> delegate;
+
+    public void setDelegate(ItemReader<T> delegate) { ... }
+
+    public T read() throws Exception {
+        T item = delegate.read();
+        if (isEndItem(item)) {
+            return null; // end the step here
+        }
+        return item;
+    }
+```
+
+The previous example actually relies on the fact that there is a default implementation
+of the `CompletionPolicy` strategy that signals a complete batch when the item to be
+processed is `null`. A more sophisticated completion policy could be implemented and
+injected into the `Step` through the `SimpleStepFactoryBean`.
+
+The following example shows how to inject a completion policy into a step in XML:
+
+XML Configuration
+
+```
+<step id="simpleStep">
+    <tasklet>
+        <chunk reader="reader" writer="writer" commit-interval="10"
+               chunk-completion-policy="completionPolicy"/>
+    </tasklet>
+</step>
+
+<bean id="completionPolicy" class="org.example...SpecialCompletionPolicy"/>
+```
+
+The following example shows how to inject a completion policy into a step in Java:
+
+Java Configuration
+
+```
+@Bean
+public Step simpleStep() {
+    return this.stepBuilderFactory.get("simpleStep")
+            .chunk(new SpecialCompletionPolicy())
+            .reader(reader())
+            .writer(writer())
+            .build();
+}
+```
+
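The idea behind a custom completion policy can be illustrated without the Spring Batch API. The following self-contained sketch (the class and method names are invented for illustration and are not the `CompletionPolicy` interface) cuts chunks when either a marker item appears or a maximum chunk size is reached:

```java
// Illustrative sketch of completion-policy logic, not the Spring Batch API:
// a chunk is complete when a marker item is seen or a size limit is reached.
import java.util.ArrayList;
import java.util.List;

public class SpecialCompletionPolicySketch {

    private final int chunkSize;

    public SpecialCompletionPolicySketch(int chunkSize) {
        this.chunkSize = chunkSize;
    }

    /** Returns true when the current chunk should be committed. */
    public boolean isComplete(List<String> chunk, String lastItem) {
        return "END".equals(lastItem) || chunk.size() >= chunkSize;
    }

    // Drives a read loop, cutting chunks according to the policy above;
    // the "END" marker plays the role of the null item that ends the step.
    public static List<List<String>> readInChunks(List<String> input, int chunkSize) {
        SpecialCompletionPolicySketch policy = new SpecialCompletionPolicySketch(chunkSize);
        List<List<String>> chunks = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (String item : input) {
            if ("END".equals(item)) {
                break; // marker item: stop reading, flush what we have
            }
            current.add(item);
            if (policy.isComplete(current, item)) {
                chunks.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) {
            chunks.add(current);
        }
        return chunks;
    }
}
```

A real `CompletionPolicy` implementation would be injected into the step as shown in the surrounding configuration examples; only the decision logic is sketched here.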
+An alternative is to set a flag in the `StepExecution`, which is checked by the `Step` implementations in the framework in between item processing. To implement this
+alternative, we need access to the current `StepExecution`, and this can be achieved by
+implementing a `StepListener` and registering it with the `Step`. The following example
+shows a listener that sets the flag:
+
+```
+public class CustomItemWriter extends ItemListenerSupport implements StepListener {
+
+    private StepExecution stepExecution;
+
+    public void beforeStep(StepExecution stepExecution) {
+        this.stepExecution = stepExecution;
+    }
+
+    public void afterRead(Object item) {
+        if (isPoisonPill(item)) {
+            stepExecution.setTerminateOnly();
+        }
+    }
+
+}
+```
+
+When the flag is set, the default behavior is for the step to throw a `JobInterruptedException`. This behavior can be controlled through the `StepInterruptionPolicy`. However, the only choice is to throw or not throw an exception,
+so this is always an abnormal ending to a job.
+
+### Adding a Footer Record
+
+Often, when writing to flat files, a “footer” record must be appended to the end of the
+file, after all processing has been completed. This can be achieved by using the `FlatFileFooterCallback` interface provided by Spring Batch. The `FlatFileFooterCallback` (and its counterpart, the `FlatFileHeaderCallback`) are optional properties of the `FlatFileItemWriter` and can be added to an item writer.
+
+The following example shows how to use the `FlatFileHeaderCallback` and the `FlatFileFooterCallback` in XML:
+
+XML Configuration
+
+```
+<bean id="itemWriter" class="org.spr...FlatFileItemWriter">
+    <property name="resource" ref="outputResource" />
+    <property name="lineAggregator" ref="lineAggregator"/>
+    <property name="headerCallback" ref="headerCallback" />
+    <property name="footerCallback" ref="footerCallback" />
+</bean>
+```
+
+The following example shows how to use the `FlatFileHeaderCallback` and the `FlatFileFooterCallback` in Java:
+
+Java Configuration
+
+```
+@Bean
+public FlatFileItemWriter<String> itemWriter(Resource outputResource) {
+    return new FlatFileItemWriterBuilder<String>()
+            .name("itemWriter")
+            .resource(outputResource)
+            .lineAggregator(lineAggregator())
+            .headerCallback(headerCallback())
+            .footerCallback(footerCallback())
+            .build();
+}
+```
+
+The footer callback interface has just one method that is called when the footer must be
+written, as shown in the following interface definition:
+
+```
+public interface FlatFileFooterCallback {
+
+    void writeFooter(Writer writer) throws IOException;
+
+}
+```
+
+#### Writing a Summary Footer
+
+A common requirement involving footer records is to aggregate information during the
+output process and to append this information to the end of the file. This footer often
+serves as a summarization of the file or provides a checksum.
+
+For example, if a batch job is writing `Trade` records to a flat file, and there is a
+requirement that the total amount from all the `Trades` is placed in a footer, then the
+following `ItemWriter` implementation can be used:
+
+```
+public class TradeItemWriter implements ItemWriter<Trade>,
+        FlatFileFooterCallback {
+
+    private ItemWriter<Trade> delegate;
+
+    private BigDecimal totalAmount = BigDecimal.ZERO;
+
+    public void write(List<? extends Trade> items) throws Exception {
+        BigDecimal chunkTotal = BigDecimal.ZERO;
+        for (Trade trade : items) {
+            chunkTotal = chunkTotal.add(trade.getAmount());
+        }
+
+        delegate.write(items);
+
+        // After successfully writing all items
+        totalAmount = totalAmount.add(chunkTotal);
+    }
+
+    public void writeFooter(Writer writer) throws IOException {
+        writer.write("Total Amount Processed: " + totalAmount);
+    }
+
+    public void setDelegate(ItemWriter<Trade> delegate) {...}
+}
+```
+
+This `TradeItemWriter` stores a `totalAmount` value that is increased with the `amount` from each `Trade` item written. After the last `Trade` is processed, the framework calls `writeFooter`, which puts the `totalAmount` into the file. Note that the `write` method
+makes use of a temporary variable, `chunkTotal`, that stores the total of the `Trade` amounts in the chunk. This is done to ensure that, if a skip occurs in the `write` method, the `totalAmount` is left unchanged. It is only at the end of the `write` method, once we are guaranteed that no exceptions are thrown, that we update the `totalAmount`.
+
+In order for the `writeFooter` method to be called, the `TradeItemWriter` (which
+implements `FlatFileFooterCallback`) must be wired into the `FlatFileItemWriter` as the `footerCallback`.
+
+The following example shows how to wire the `TradeItemWriter` in XML:
+
+XML Configuration
+
+```
+<bean id="tradeItemWriter" class="..TradeItemWriter">
+    <property name="delegate" ref="flatFileItemWriter" />
+</bean>
+
+<bean id="flatFileItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
+    <property name="resource" ref="outputResource" />
+    <property name="lineAggregator" ref="lineAggregator"/>
+    <property name="footerCallback" ref="tradeItemWriter" />
+</bean>
+```
+
+The following example shows how to wire the `TradeItemWriter` in Java:
+
+Java Configuration
+
+```
+@Bean
+public TradeItemWriter tradeItemWriter() {
+    TradeItemWriter itemWriter = new TradeItemWriter();
+
+    itemWriter.setDelegate(flatFileItemWriter(null));
+
+    return itemWriter;
+}
+
+@Bean
+public FlatFileItemWriter<Trade> flatFileItemWriter(Resource outputResource) {
+    return new FlatFileItemWriterBuilder<Trade>()
+            .name("itemWriter")
+            .resource(outputResource)
+            .lineAggregator(lineAggregator())
+            .footerCallback(tradeItemWriter())
+            .build();
+}
+```
+
+The way that the `TradeItemWriter` has been written so far functions correctly only if
+the `Step` is not restartable. This is because the class is stateful (since it stores the `totalAmount`), but the `totalAmount` is not persisted to the database. Therefore, it
+cannot be retrieved in the event of a restart. In order to make this class restartable,
+the `ItemStream` interface should be implemented along with the methods `open` and `update`, as shown in the following example:
+
+```
+public void open(ExecutionContext executionContext) {
+    if (executionContext.containsKey("total.amount")) {
+        totalAmount = (BigDecimal) executionContext.get("total.amount");
+    }
+}
+
+public void update(ExecutionContext executionContext) {
+    executionContext.put("total.amount", totalAmount);
+}
+```
+
+The update method stores the most current version of `totalAmount` to the `ExecutionContext` just before that object is persisted to the database. The open method
+retrieves any existing `totalAmount` from the `ExecutionContext` and uses it as the
+starting point for processing, allowing the `TradeItemWriter` to pick up on restart where
+it left off the previous time the `Step` was run.
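
The open/update interplay can be sketched without the framework. In the following self-contained sketch, a plain `Map` stands in for Spring Batch's `ExecutionContext`, and the class name is invented for illustration:

```java
// Minimal sketch of the restart flow: update() persists the running total,
// open() restores it, so a "restarted" instance resumes where the first left off.
import java.math.BigDecimal;
import java.util.Map;

public class RestartableTotalSketch {

    private BigDecimal totalAmount = BigDecimal.ZERO;

    // Restore state saved by a previous run, if any.
    public void open(Map<String, Object> executionContext) {
        if (executionContext.containsKey("total.amount")) {
            totalAmount = (BigDecimal) executionContext.get("total.amount");
        }
    }

    // Persist the current state so a restart can pick it up.
    public void update(Map<String, Object> executionContext) {
        executionContext.put("total.amount", totalAmount);
    }

    // Stands in for writing a chunk and accumulating its total.
    public void write(BigDecimal chunkTotal) {
        totalAmount = totalAmount.add(chunkTotal);
    }

    public BigDecimal getTotalAmount() {
        return totalAmount;
    }
}
```

Running one instance, persisting via `update`, and then opening a fresh instance against the same map shows the total carrying across the "restart".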
+
+### Driving Query Based ItemReaders
+
+In the [chapter on readers and writers](readersAndWriters.html), database input using
+paging was discussed. Many database vendors, such as DB2, have extremely pessimistic
+locking strategies that can cause issues if the table being read also needs to be used by
+other portions of the online application. Furthermore, opening cursors over extremely
+large datasets can cause issues on databases from certain vendors. Therefore, many
+projects prefer to use a 'Driving Query' approach to reading in data. This approach works
+by iterating over keys, rather than the entire object that needs to be returned, as the
+following image illustrates:
+
+![Driving Query Job](https://docs.spring.io/spring-batch/docs/current/reference/html/images/drivingQueryExample.png)
+
+Figure 1. Driving Query Job
+
+As you can see, the example shown in the preceding image uses the same 'FOO' table as was
+used in the cursor-based example. However, rather than selecting the entire row, only the
+IDs were selected in the SQL statement. So, rather than a `FOO` object being returned
+from `read`, an `Integer` is returned. This number can then be used to query for the
+'details', which is a complete `Foo` object, as shown in the following image:
+
+![Driving Query Example](https://docs.spring.io/spring-batch/docs/current/reference/html/images/drivingQueryJob.png)
+
+Figure 2. Driving Query Example
+
+An `ItemProcessor` should be used to transform the key obtained from the driving query
+into a full `Foo` object. An existing DAO can be used to query for the full object based
+on the key.
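
The processor's contract here is simply "key in, full object out". The following hedged sketch shows that shape with a `Map` standing in for the DAO; the class name and `String` payload are invented for illustration, and a real implementation would query the FOO table:

```java
// Sketch of the driving-query second step: turn a key returned by the
// driving query into the full record via a DAO lookup.
import java.util.Map;

public class FooKeyProcessorSketch {

    // Simulated DAO: key -> full record. A real DAO would hit the database.
    private final Map<Integer, String> fooDao;

    public FooKeyProcessorSketch(Map<Integer, String> fooDao) {
        this.fooDao = fooDao;
    }

    /** Transforms a key from the driving query into the full object. */
    public String process(Integer key) {
        String foo = fooDao.get(key);
        if (foo == null) {
            throw new IllegalStateException("No FOO row for key " + key);
        }
        return foo;
    }
}
```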
+
+### Multi-Line Records
+
+While it is usually the case with flat files that each record is confined to a single
+line, it is common that a file might have records spanning multiple lines with multiple
+formats. The following excerpt from a file shows an example of such an arrangement:
+
+```
+HEA;0013100345;2007-02-15
+NCU;Smith;Peter;;T;20014539;F
+BAD;;Oak Street 31/A;;Small Town;00235;IL;US
+FOT;2;2;267.34
+```
+
+Everything between the line starting with 'HEA' and the line starting with 'FOT' is
+considered one record. There are a few considerations that must be made in order to
+handle this situation correctly:
+
+* Instead of reading one record at a time, the `ItemReader` must read every line of the
+ multi-line record as a group, so that it can be passed to the `ItemWriter` intact.
+
+* Each line type may need to be tokenized differently.
+
+Because a single record spans multiple lines and because we may not know how many lines
+there are, the `ItemReader` must be careful to always read an entire record. In order to
+do this, a custom `ItemReader` should be implemented as a wrapper for the `FlatFileItemReader`.
+
+The following example shows how to implement a custom `ItemReader` in XML:
+
+XML Configuration
+
+```
+<bean id="itemReader" class="org.spr...MultiLineTradeItemReader">
+    <property name="delegate">
+        <bean class="org.springframework.batch.item.file.FlatFileItemReader">
+            <property name="resource" value="data/iosample/input/multiLine.txt" />
+            <property name="lineMapper">
+                <bean class="org.spr...DefaultLineMapper">
+                    <property name="lineTokenizer" ref="orderFileTokenizer"/>
+                    <property name="fieldSetMapper" ref="orderFieldSetMapper"/>
+                </bean>
+            </property>
+        </bean>
+    </property>
+</bean>
+```
+
+The following example shows how to implement a custom `ItemReader` in Java:
+
+Java Configuration
+
+```
+@Bean
+public MultiLineTradeItemReader itemReader() {
+    MultiLineTradeItemReader itemReader = new MultiLineTradeItemReader();
+
+    itemReader.setDelegate(flatFileItemReader());
+
+    return itemReader;
+}
+
+@Bean
+public FlatFileItemReader<FieldSet> flatFileItemReader() {
+    FlatFileItemReader<FieldSet> reader = new FlatFileItemReaderBuilder<FieldSet>()
+            .name("flatFileItemReader")
+            .resource(new ClassPathResource("data/iosample/input/multiLine.txt"))
+            .lineTokenizer(orderFileTokenizer())
+            .fieldSetMapper(orderFieldSetMapper())
+            .build();
+    return reader;
+}
+```
+
+To ensure that each line is tokenized properly, which is especially important for
+fixed-length input, the `PatternMatchingCompositeLineTokenizer` can be used on the
+delegate `FlatFileItemReader`. See [`FlatFileItemReader` in the Readers and
+Writers chapter](readersAndWriters.html#flatFileItemReader) for more details. The delegate reader then uses a `PassThroughFieldSetMapper` to deliver a `FieldSet` for each line back to the wrapping `ItemReader`.
+
+The following example shows how to ensure that each line is properly tokenized in XML:
+
+XML Content
+
+```
+<bean id="orderFileTokenizer" class="org.spr...PatternMatchingCompositeLineTokenizer">
+    <property name="tokenizers">
+        <map>
+            <entry key="HEA*" value-ref="headerRecordTokenizer" />
+            <entry key="FOT*" value-ref="footerRecordTokenizer" />
+            <entry key="NCU*" value-ref="customerLineTokenizer" />
+            <entry key="BAD*" value-ref="billingAddressLineTokenizer" />
+        </map>
+    </property>
+</bean>
+```
+
+The following example shows how to ensure that each line is properly tokenized in Java:
+
+Java Content
+
+```
+@Bean
+public PatternMatchingCompositeLineTokenizer orderFileTokenizer() {
+    PatternMatchingCompositeLineTokenizer tokenizer =
+            new PatternMatchingCompositeLineTokenizer();
+
+    Map<String, LineTokenizer> tokenizers = new HashMap<>(4);
+
+    tokenizers.put("HEA*", headerRecordTokenizer());
+    tokenizers.put("FOT*", footerRecordTokenizer());
+    tokenizers.put("NCU*", customerLineTokenizer());
+    tokenizers.put("BAD*", billingAddressLineTokenizer());
+
+    tokenizer.setTokenizers(tokenizers);
+
+    return tokenizer;
+}
+```
+
+This wrapper has to be able to recognize the end of a record so that it can continually
+call `read()` on its delegate until the end is reached. For each line that is read, the
+wrapper should build up the item to be returned. Once the footer is reached, the item can
+be returned for delivery to the `ItemProcessor` and `ItemWriter`, as shown in the
+following example:
+
+```
+private FlatFileItemReader<FieldSet> delegate;
+
+public Trade read() throws Exception {
+    Trade t = null;
+
+    for (FieldSet line = null; (line = this.delegate.read()) != null;) {
+        String prefix = line.readString(0);
+        if (prefix.equals("HEA")) {
+            t = new Trade(); // Record must start with header
+        }
+        else if (prefix.equals("NCU")) {
+            Assert.notNull(t, "No header was found.");
+            t.setLast(line.readString(1));
+            t.setFirst(line.readString(2));
+            ...
+        }
+        else if (prefix.equals("BAD")) {
+            Assert.notNull(t, "No header was found.");
+            t.setCity(line.readString(4));
+            t.setState(line.readString(6));
+            ...
+        }
+        else if (prefix.equals("FOT")) {
+            return t; // Record must end with footer
+        }
+    }
+    Assert.isNull(t, "No 'END' was found.");
+    return null;
+}
+```
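
The aggregation loop itself has no Spring dependencies, so it can be exercised in isolation. The following sketch is illustrative only (the `Trade` holder and the comma-delimited layout are assumptions, not the sample's real record format); it mimics the delegate with an in-memory iterator:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

class MultiLineReadSketch {

    // Minimal stand-in for the aggregated item; field names are illustrative.
    static class Trade {
        String first;
        String last;
    }

    // Mimics the wrapper's read(): consume delegate lines until a footer,
    // then return the assembled record, or null when input is exhausted.
    static Trade read(Iterator<String> delegate) {
        Trade t = null;
        while (delegate.hasNext()) {
            String line = delegate.next();
            String prefix = line.substring(0, 3);
            if (prefix.equals("HEA")) {
                t = new Trade();              // record must start with a header
            } else if (prefix.equals("NCU")) {
                String[] fields = line.split(",");
                t.last = fields[1];
                t.first = fields[2];
            } else if (prefix.equals("FOT")) {
                return t;                     // record must end with a footer
            }
        }
        return null;                          // no more records
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("HEA,001", "NCU,Smith,Jane", "FOT,001");
        Iterator<String> it = lines.iterator();
        Trade trade = read(it);
        System.out.println(trade.first + " " + trade.last); // Jane Smith
    }
}
```

A second call to `read` on the exhausted iterator returns `null`, which is exactly the signal the framework expects from an `ItemReader` at end of input.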
+
+### Executing System Commands
+
+Many batch jobs require that an external command be called from within the batch job.
+Such a process could be kicked off separately by the scheduler, but the advantage of
+common metadata about the run would be lost. Furthermore, a multi-step job would also
+need to be split up into multiple jobs as well.
+
+Because the need is so common, Spring Batch provides a `Tasklet` implementation for
+calling system commands.
+
+The following example shows how to call an external command in XML:
+
+XML Configuration
+
+```
+<bean class="org.springframework.batch.core.step.tasklet.SystemCommandTasklet">
+    <property name="command" value="echo hello" />
+    <!-- 5 second timeout for the command to complete -->
+    <property name="timeout" value="5000" />
+</bean>
+```
+
+The following example shows how to call an external command in Java:
+
+Java Configuration
+
+```
+@Bean
+public SystemCommandTasklet tasklet() {
+ SystemCommandTasklet tasklet = new SystemCommandTasklet();
+
+ tasklet.setCommand("echo hello");
+ tasklet.setTimeout(5000);
+
+ return tasklet;
+}
+```
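
What the tasklet does is roughly equivalent to forking a process and enforcing a timeout. The following plain-Java sketch shows the idea (it is not the tasklet's actual implementation, which also handles interruption and exit-code mapping):

```java
import java.util.concurrent.TimeUnit;

class SystemCommandSketch {
    public static void main(String[] args) throws Exception {
        // Fork the command, then fail if it does not finish within the timeout,
        // roughly mirroring the tasklet's command/timeout settings.
        Process process = new ProcessBuilder("echo", "hello").inheritIO().start();
        if (!process.waitFor(5000, TimeUnit.MILLISECONDS)) {
            process.destroyForcibly();
            throw new IllegalStateException("command timed out");
        }
        System.out.println("exit code: " + process.exitValue()); // exit code: 0
    }
}
```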
+
+### Handling Step Completion When No Input is Found
+
+In many batch scenarios, finding no rows in a database or file to process is not
+exceptional. The `Step` is simply considered to have found no work and completes with 0
+items read. All of the `ItemReader` implementations provided out of the box in Spring
+Batch default to this approach. This can lead to some confusion if nothing is written out
+even when input is present (which usually happens if a file was misnamed or some similar
+issue arises). For this reason, the metadata itself should be inspected to determine how
+much work the framework found to be processed. However, what if finding no input is
+considered exceptional? In this case, programmatically checking the metadata for no items
+processed and causing failure is the best solution. Because this is a common use case,
+Spring Batch provides a listener with exactly this functionality, as shown in
+the class definition for `NoWorkFoundStepExecutionListener`:
+
+```
+public class NoWorkFoundStepExecutionListener extends StepExecutionListenerSupport {
+
+ public ExitStatus afterStep(StepExecution stepExecution) {
+ if (stepExecution.getReadCount() == 0) {
+ return ExitStatus.FAILED;
+ }
+ return null;
+ }
+
+}
+```
+
+The preceding `StepExecutionListener` inspects the `readCount` property of the `StepExecution` during the 'afterStep' phase to determine if no items were read. If that
+is the case, an exit code `FAILED` is returned, indicating that the `Step` should fail.
+Otherwise, `null` is returned, which does not affect the status of the `Step`.
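
Registering the listener works like any other `StepExecutionListener`. The following Java-configuration sketch assumes `reader()` and `writer()` beans are defined elsewhere:

```java
@Bean
public Step verifiedStep() {
    return this.stepBuilderFactory.get("verifiedStep")
            .<String, String>chunk(10)
            .reader(reader())
            .writer(writer())
            .listener(new NoWorkFoundStepExecutionListener())
            .build();
}
```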
+
+### Passing Data to Future Steps
+
+It is often useful to pass information from one step to another. This can be done through
+the `ExecutionContext`. The catch is that there are two `ExecutionContexts`: one at the `Step` level and one at the `Job` level. The `Step` `ExecutionContext` remains only as
+long as the step, while the `Job` `ExecutionContext` remains through the whole `Job`. On
+the other hand, the `Step` `ExecutionContext` is updated every time the `Step` commits a
+chunk, while the `Job` `ExecutionContext` is updated only at the end of each `Step`.
+
+The consequence of this separation is that all data must be placed in the `Step` `ExecutionContext` while the `Step` is executing. Doing so ensures that the data is
+stored properly while the `Step` runs. If data is stored to the `Job` `ExecutionContext`,
+then it is not persisted during `Step` execution. If the `Step` fails, that data is lost.
+
+```
+public class SavingItemWriter implements ItemWriter<Object> {
+    private StepExecution stepExecution;
+
+    public void write(List<? extends Object> items) throws Exception {
+        // ...
+
+        ExecutionContext stepContext = this.stepExecution.getExecutionContext();
+        stepContext.put("someKey", someObject);
+    }
+
+    @BeforeStep
+    public void saveStepExecution(StepExecution stepExecution) {
+        this.stepExecution = stepExecution;
+    }
+}
+```
+
+To make the data available to future `Steps`, it must be “promoted” to the `Job` `ExecutionContext` after the step has finished. Spring Batch provides the `ExecutionContextPromotionListener` for this purpose. The listener must be configured
+with the keys related to the data in the `ExecutionContext` that must be promoted. It can
+also, optionally, be configured with a list of exit code patterns for which the promotion
+should occur (`COMPLETED` is the default). As with all listeners, it must be registered
+on the `Step`.
+
+The following example shows how to promote data to the `Job` `ExecutionContext` in XML:
+
+XML Configuration
+
+```
+<job id="job1">
+    <step id="step1">
+        <tasklet>
+            <chunk reader="reader" writer="savingWriter" commit-interval="10"/>
+        </tasklet>
+        <listeners>
+            <listener ref="promotionListener"/>
+        </listeners>
+    </step>
+
+    <step id="step2">
+       ...
+    </step>
+</job>
+
+<beans:bean id="promotionListener" class="org.springframework.batch.core.listener.ExecutionContextPromotionListener">
+    <beans:property name="keys">
+        <list>
+            <value>someKey</value>
+        </list>
+    </beans:property>
+</beans:bean>
+```
+
+The following example shows how to promote data to the `Job` `ExecutionContext` in Java:
+
+Java Configuration
+
+```
+@Bean
+public Job job1() {
+ return this.jobBuilderFactory.get("job1")
+ .start(step1())
+        .next(step2())
+ .build();
+}
+
+@Bean
+public Step step1() {
+ return this.stepBuilderFactory.get("step1")
+ .chunk(10)
+ .reader(reader())
+ .writer(savingWriter())
+ .listener(promotionListener())
+ .build();
+}
+
+@Bean
+public ExecutionContextPromotionListener promotionListener() {
+ ExecutionContextPromotionListener listener = new ExecutionContextPromotionListener();
+
+ listener.setKeys(new String[] {"someKey"});
+
+ return listener;
+}
+```
+
+Finally, the saved values must be retrieved from the `Job` `ExecutionContext`, as shown
+in the following example:
+
+```
+public class RetrievingItemWriter implements ItemWriter<Object> {
+    private Object someObject;
+
+    public void write(List<? extends Object> items) throws Exception {
+        // ...
+    }
+
+    @BeforeStep
+    public void retrieveInterstepData(StepExecution stepExecution) {
+        JobExecution jobExecution = stepExecution.getJobExecution();
+        ExecutionContext jobContext = jobExecution.getExecutionContext();
+        this.someObject = jobContext.get("someKey");
+    }
+}
+```
\ No newline at end of file
diff --git a/docs/en/spring-batch/domain.md b/docs/en/spring-batch/domain.md
new file mode 100644
index 0000000000000000000000000000000000000000..a424d19cf8a333c5ec076481396d71aa0e5b39d8
--- /dev/null
+++ b/docs/en/spring-batch/domain.md
@@ -0,0 +1,434 @@
+# The Domain Language of Batch
+
+## The Domain Language of Batch
+
+To any experienced batch architect, the overall concepts of batch processing used in
+Spring Batch should be familiar and comfortable. There are "Jobs" and "Steps" and
+developer-supplied processing units called `ItemReader` and `ItemWriter`. However,
+because of the Spring patterns, operations, templates, callbacks, and idioms, there are
+opportunities for the following:
+
+* Significant improvement in adherence to a clear separation of concerns.
+
+* Clearly delineated architectural layers and services provided as interfaces.
+
+* Simple and default implementations that allow for quick adoption and ease of use
+ out-of-the-box.
+
+* Significantly enhanced extensibility.
+
+The following diagram is a simplified version of the batch reference architecture that
+has been used for decades. It provides an overview of the components that make up the
+domain language of batch processing. This architecture framework is a blueprint that has
+been proven through decades of implementations on the last several generations of
+platforms (COBOL/Mainframe, C/Unix, and now Java/anywhere). JCL and COBOL developers
+are likely to be as comfortable with the concepts as C, C#, and Java developers. Spring
+Batch provides a physical implementation of the layers, components, and technical
+services commonly found in the robust, maintainable systems that are used to address the
+creation of simple to complex batch applications, with the infrastructure and extensions
+to address very complex processing needs.
+
+![Figure 2.1: Batch Stereotypes](https://docs.spring.io/spring-batch/docs/current/reference/html/images/spring-batch-reference-model.png)
+
+Figure 1. Batch Stereotypes
+
+The preceding diagram highlights the key concepts that make up the domain language of
+Spring Batch. A Job has one to many steps, each of which has exactly one `ItemReader`,
+one `ItemProcessor`, and one `ItemWriter`. A job needs to be launched (with `JobLauncher`), and metadata about the currently running process needs to be stored (in `JobRepository`).
+
+### Job
+
+This section describes stereotypes relating to the concept of a batch job. A `Job` is an
+entity that encapsulates an entire batch process. As is common with other Spring
+projects, a `Job` is wired together with either an XML configuration file or Java-based
+configuration. This configuration may be referred to as the "job configuration". However, `Job` is just the top of an overall hierarchy, as shown in the following diagram:
+
+![Job Hierarchy](https://docs.spring.io/spring-batch/docs/current/reference/html/images/job-heirarchy.png)
+
+Figure 2. Job Hierarchy
+
+In Spring Batch, a `Job` is simply a container for `Step` instances. It combines multiple
+steps that belong logically together in a flow and allows for configuration of properties
+global to all steps, such as restartability. The job configuration contains:
+
+* The simple name of the job.
+
+* Definition and ordering of `Step` instances.
+
+* Whether or not the job is restartable.
+
+For those who use Java configuration, Spring Batch provides a default implementation of
+the Job interface in the form of the `SimpleJob` class, which creates some standard
+functionality on top of `Job`. When using Java-based configuration, a collection of
+builders is made available for the instantiation of a `Job`, as shown in the following
+example:
+
+```
+@Bean
+public Job footballJob() {
+ return this.jobBuilderFactory.get("footballJob")
+ .start(playerLoad())
+ .next(gameLoad())
+ .next(playerSummarization())
+ .build();
+}
+```
+
+For those who use XML configuration, Spring Batch provides a default implementation of the `Job` interface in the form of the `SimpleJob` class, which creates some standard
+functionality on top of `Job`. However, the batch namespace abstracts away the need to
+instantiate it directly. Instead, the `<job>` element can be used, as shown in the
+following example:
+
+```
+<job id="footballJob">
+    <step id="playerload"          parent="s1" next="gameLoad"/>
+    <step id="gameLoad"            parent="s2" next="playerSummarization"/>
+    <step id="playerSummarization" parent="s3"/>
+</job>
+```
+
+#### JobInstance
+
+A `JobInstance` refers to the concept of a logical job run. Consider a batch job that
+should be run once at the end of the day, such as the 'EndOfDay' `Job` from the preceding
+diagram. There is one 'EndOfDay' job, but each individual run of the `Job` must be
+tracked separately. In the case of this job, there is one logical `JobInstance` per day.
+For example, there is a January 1st run, a January 2nd run, and so on. If the January 1st
+run fails the first time and is run again the next day, it is still the January 1st run.
+(Usually, this corresponds with the data it is processing as well, meaning the January
+1st run processes data for January 1st). Therefore, each `JobInstance` can have multiple
+executions (`JobExecution` is discussed in more detail later in this chapter), and only
+one `JobInstance` corresponding to a particular `Job` and identifying `JobParameters` can
+run at a given time.
+
+The definition of a `JobInstance` has absolutely no bearing on the data to be loaded.
+It is entirely up to the `ItemReader` implementation to determine how data is loaded. For
+example, in the EndOfDay scenario, there may be a column on the data that indicates the
+'effective date' or 'schedule date' to which the data belongs. So, the January 1st run
+would load only data from the 1st, and the January 2nd run would use only data from the
+2nd. Because this determination is likely to be a business decision, it is left up to the `ItemReader` to decide. However, using the same `JobInstance` determines whether or not
+the 'state' (that is, the `ExecutionContext`, which is discussed later in this chapter)
+from previous executions is used. Using a new `JobInstance` means 'start from the
+beginning', and using an existing instance generally means 'start from where you left
+off'.
+
+#### JobParameters
+
+Having discussed `JobInstance` and how it differs from `Job`, the natural question to ask
+is: "How is one `JobInstance` distinguished from another?" The answer is: `JobParameters`. A `JobParameters` object holds a set of parameters used to start a batch
+job. They can be used for identification or even as reference data during the run, as
+shown in the following image:
+
+![Job Parameters](https://docs.spring.io/spring-batch/docs/current/reference/html/images/job-stereotypes-parameters.png)
+
+Figure 3. Job Parameters
+
+In the preceding example, where there are two instances, one for January 1st, and another
+for January 2nd, there is really only one `Job`, but it has two `JobParameter` objects:
+one that was started with a job parameter of 01-01-2017 and another that was started with
+a parameter of 01-02-2017. Thus, the contract can be defined as: `JobInstance` = `Job` + identifying `JobParameters`. This allows a developer to effectively control how a `JobInstance` is defined, since they control what parameters are passed in.
+
+| |Not all job parameters are required to contribute to the identification of a `JobInstance`. By default, they do so. However, the framework also allows the submission of a `Job` with parameters that do not contribute to the identity of a `JobInstance`.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
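
The identity contract can be sketched in plain Java. The following is illustrative only (Spring Batch actually derives a hash-based key from the identifying parameters, not a concatenated string):

```java
import java.util.Map;
import java.util.TreeMap;

class JobInstanceIdentity {

    // Illustrative parameter holder: a value plus an "identifying" flag,
    // mirroring the flag carried by each job parameter.
    record Param(String value, boolean identifying) {}

    // Two runs belong to the same logical JobInstance when the job name and
    // all *identifying* parameters match; non-identifying ones are ignored.
    static String instanceKey(String jobName, Map<String, Param> params) {
        StringBuilder key = new StringBuilder(jobName);
        new TreeMap<>(params).forEach((name, p) -> {
            if (p.identifying()) {
                key.append('|').append(name).append('=').append(p.value());
            }
        });
        return key.toString();
    }

    public static void main(String[] args) {
        Map<String, Param> run1 = Map.of(
                "schedule.date", new Param("2017-01-01", true),
                "run.id", new Param("1", false));
        Map<String, Param> run2 = Map.of(
                "schedule.date", new Param("2017-01-01", true),
                "run.id", new Param("2", false));
        // Same identifying parameters -> same JobInstance despite differing run.id.
        System.out.println(instanceKey("EndOfDayJob", run1)
                .equals(instanceKey("EndOfDayJob", run2))); // true
    }
}
```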
+
+#### JobExecution
+
+A `JobExecution` refers to the technical concept of a single attempt to run a Job. An
+execution may end in failure or success, but the `JobInstance` corresponding to a given
+execution is not considered to be complete unless the execution completes successfully.
+Using the EndOfDay `Job` described previously as an example, consider a `JobInstance` for
+01-01-2017 that failed the first time it was run. If it is run again with the same
+identifying job parameters as the first run (01-01-2017), a new `JobExecution` is
+created. However, there is still only one `JobInstance`.
+
+A `Job` defines what a job is and how it is to be executed, and a `JobInstance` is a
+purely organizational object to group executions together, primarily to enable correct
+restart semantics. A `JobExecution`, however, is the primary storage mechanism for what
+actually happened during a run and contains many more properties that must be controlled
+and persisted, as shown in the following table:
+
+| Property | Definition |
+|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Status | A `BatchStatus` object that indicates the status of the execution. While running, it is`BatchStatus#STARTED`. If it fails, it is `BatchStatus#FAILED`. If it finishes successfully, it is `BatchStatus#COMPLETED` |
+| startTime | A `java.util.Date` representing the current system time when the execution was started. This field is empty if the job has yet to start. |
+| endTime | A `java.util.Date` representing the current system time when the execution finished, regardless of whether or not it was successful. The field is empty if the job has yet to finish. |
+| exitStatus | The `ExitStatus`, indicating the result of the run. It is most important, because it contains an exit code that is returned to the caller. See chapter 5 for more details. The field is empty if the job has yet to finish. |
+| createTime |A `java.util.Date` representing the current system time when the `JobExecution` was first persisted. The job may not have been started yet (and thus has no start time), but it always has a createTime, which is required by the framework for managing job level`ExecutionContexts`.|
+| lastUpdated | A `java.util.Date` representing the last time a `JobExecution` was persisted. This field is empty if the job has yet to start. |
+|executionContext | The "property bag" containing any user data that needs to be persisted between executions. |
+|failureExceptions| The list of exceptions encountered during the execution of a `Job`. These can be useful if more than one exception is encountered during the failure of a `Job`. |
+
+These properties are important because they are persisted and can be used to completely
+determine the status of an execution. For example, if the EndOfDay job for 01-01 is
+executed at 9:00 PM and fails at 9:30, the following entries are made in the batch
+metadata tables:
+
+|JOB\_INST\_ID| JOB\_NAME |
+|-------------|-----------|
+| 1 |EndOfDayJob|
+
+|JOB\_EXECUTION\_ID|TYPE\_CD| KEY\_NAME |DATE\_VAL |IDENTIFYING|
+|------------------|--------|-------------|----------|-----------|
+| 1 | DATE |schedule.Date|2017-01-01| TRUE |
+
+|JOB\_EXEC\_ID|JOB\_INST\_ID| START\_TIME | END\_TIME |STATUS|
+|-------------|-------------|----------------|----------------|------|
+| 1 | 1 |2017-01-01 21:00|2017-01-01 21:30|FAILED|
+
+| |Column names may have been abbreviated or removed for the sake of clarity and formatting.|
+|---|---------------------------------------------------------------------------------------------|
+
+Now that the job has failed, assume that it took the entire night for the problem to be
+determined, so that the 'batch window' is now closed. Further assuming that the window
+starts at 9:00 PM, the job is kicked off again for 01-01, starting where it left off and
+completing successfully at 9:30. Because it is now the next day, the 01-02 job must be
+run as well, and it is kicked off just afterwards at 9:31 and completes in its normal one
+hour time at 10:30. There is no requirement that one `JobInstance` be kicked off after
+another, unless there is potential for the two jobs to attempt to access the same data,
+causing issues with locking at the database level. It is entirely up to the scheduler to
+determine when a `Job` should be run. Since they are separate `JobInstances`, Spring
+Batch makes no attempt to stop them from being run concurrently. (Attempting to run the
+same `JobInstance` while another is already running results in a `JobExecutionAlreadyRunningException` being thrown). There should now be an extra entry
+in both the `JobInstance` and `JobParameters` tables and two extra entries in the `JobExecution` table, as shown in the following tables:
+
+|JOB\_INST\_ID| JOB\_NAME |
+|-------------|-----------|
+| 1 |EndOfDayJob|
+| 2 |EndOfDayJob|
+
+|JOB\_EXECUTION\_ID|TYPE\_CD| KEY\_NAME | DATE\_VAL |IDENTIFYING|
+|------------------|--------|-------------|-------------------|-----------|
+| 1 | DATE |schedule.Date|2017-01-01 00:00:00| TRUE |
+| 2 | DATE |schedule.Date|2017-01-01 00:00:00| TRUE |
+| 3 | DATE |schedule.Date|2017-01-02 00:00:00| TRUE |
+
+|JOB\_EXEC\_ID|JOB\_INST\_ID| START\_TIME | END\_TIME | STATUS |
+|-------------|-------------|----------------|----------------|---------|
+| 1 | 1 |2017-01-01 21:00|2017-01-01 21:30| FAILED |
+| 2 | 1 |2017-01-02 21:00|2017-01-02 21:30|COMPLETED|
+| 3 | 2 |2017-01-02 21:31|2017-01-02 22:29|COMPLETED|
+
+| |Column names may have been abbreviated or removed for the sake of clarity and formatting.|
+|---|---------------------------------------------------------------------------------------------|
+
+### Step
+
+A `Step` is a domain object that encapsulates an independent, sequential phase of a batch
+job. Therefore, every Job is composed entirely of one or more steps. A `Step` contains
+all of the information necessary to define and control the actual batch processing. This
+is a necessarily vague description because the contents of any given `Step` are at the
+discretion of the developer writing a `Job`. A `Step` can be as simple or complex as the
+developer desires. A simple `Step` might load data from a file into the database,
+requiring little or no code (depending upon the implementations used). A more complex `Step` may have complicated business rules that are applied as part of the processing. As
+with a `Job`, a `Step` has an individual `StepExecution` that correlates with a unique `JobExecution`, as shown in the following image:
+
+![Figure 2.1: Job Hierarchy With Steps](https://docs.spring.io/spring-batch/docs/current/reference/html/images/jobHeirarchyWithSteps.png)
+
+Figure 4. Job Hierarchy With Steps
+
+#### StepExecution
+
+A `StepExecution` represents a single attempt to execute a `Step`. A new `StepExecution` is created each time a `Step` is run, similar to `JobExecution`. However, if a step fails
+to execute because the step before it fails, no execution is persisted for it. A `StepExecution` is created only when its `Step` is actually started.
+
+`Step` executions are represented by objects of the `StepExecution` class. Each execution
+contains a reference to its corresponding step and `JobExecution` and transaction related
+data, such as commit and rollback counts and start and end times. Additionally, each step
+execution contains an `ExecutionContext`, which contains any data a developer needs to
+have persisted across batch runs, such as statistics or state information needed to
+restart. The following table lists the properties for `StepExecution`:
+
+| Property | Definition |
+|----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Status |A `BatchStatus` object that indicates the status of the execution. While running, the status is `BatchStatus.STARTED`. If it fails, the status is `BatchStatus.FAILED`. If it finishes successfully, the status is `BatchStatus.COMPLETED`.|
+| startTime | A `java.util.Date` representing the current system time when the execution was started. This field is empty if the step has yet to start. |
+| endTime | A `java.util.Date` representing the current system time when the execution finished, regardless of whether or not it was successful. This field is empty if the step has yet to exit. |
+| exitStatus | The `ExitStatus` indicating the result of the execution. It is most important, because it contains an exit code that is returned to the caller. See chapter 5 for more details. This field is empty if the job has yet to exit. |
+|executionContext| The "property bag" containing any user data that needs to be persisted between executions. |
+| readCount | The number of items that have been successfully read. |
+| writeCount | The number of items that have been successfully written. |
+| commitCount | The number of transactions that have been committed for this execution. |
+| rollbackCount | The number of times the business transaction controlled by the `Step` has been rolled back. |
+| readSkipCount | The number of times `read` has failed, resulting in a skipped item. |
+|processSkipCount| The number of times `process` has failed, resulting in a skipped item. |
+| filterCount | The number of items that have been 'filtered' by the `ItemProcessor`. |
+| writeSkipCount | The number of times `write` has failed, resulting in a skipped item. |
+
+### ExecutionContext
+
+An `ExecutionContext` represents a collection of key/value pairs that are persisted and
+controlled by the framework in order to allow developers a place to store persistent
+state that is scoped to a `StepExecution` object or a `JobExecution` object. For those
+familiar with Quartz, it is very similar to JobDataMap. The best usage example is to
+facilitate restart. Using flat file input as an example, while processing individual
+lines, the framework periodically persists the `ExecutionContext` at commit points. Doing
+so allows the `ItemReader` to store its state in case a fatal error occurs during the run
+or even if the power goes out. All that is needed is to put the current number of lines
+read into the context, as shown in the following example, and the framework will do the
+rest:
+
+```
+executionContext.putLong(getKey(LINES_READ_COUNT), reader.getPosition());
+```
+
+Using the EndOfDay example from the `Job` Stereotypes section as an example, assume there
+is one step, 'loadData', that loads a file into the database. After the first failed run,
+the metadata tables would look like the following example:
+
+|JOB\_INST\_ID| JOB\_NAME |
+|-------------|-----------|
+| 1 |EndOfDayJob|
+
+|JOB\_EXECUTION\_ID|TYPE\_CD| KEY\_NAME |DATE\_VAL |
+|-------------|--------|-------------|----------|
+| 1 | DATE |schedule.Date|2017-01-01|
+
+|JOB\_EXEC\_ID|JOB\_INST\_ID| START\_TIME | END\_TIME |STATUS|
+|-------------|-------------|----------------|----------------|------|
+| 1 | 1 |2017-01-01 21:00|2017-01-01 21:30|FAILED|
+
+|STEP\_EXEC\_ID|JOB\_EXEC\_ID|STEP\_NAME| START\_TIME | END\_TIME |STATUS|
+|--------------|-------------|----------|----------------|----------------|------|
+| 1 | 1 | loadData |2017-01-01 21:00|2017-01-01 21:30|FAILED|
+
+|STEP\_EXEC\_ID| SHORT\_CONTEXT |
+|--------------|-------------------|
+| 1 |{piece.count=40321}|
+
+In the preceding case, the `Step` ran for 30 minutes and processed 40,321 'pieces', which
+would represent lines in a file in this scenario. This value is updated just before each
+commit by the framework and can contain multiple rows corresponding to entries within the `ExecutionContext`. Being notified before a commit requires one of the various `StepListener` implementations (or an `ItemStream`), which are discussed in more detail
+later in this guide. As with the previous example, it is assumed that the `Job` is
+restarted the next day. When it is restarted, the values from the `ExecutionContext` of
+the last run are reconstituted from the database. When the `ItemReader` is opened, it can
+check to see if it has any stored state in the context and initialize itself from there,
+as shown in the following example:
+
+```
+if (executionContext.containsKey(getKey(LINES_READ_COUNT))) {
+ log.debug("Initializing for restart. Restart data is: " + executionContext);
+
+ long lineCount = executionContext.getLong(getKey(LINES_READ_COUNT));
+
+ LineReader reader = getReader();
+
+ Object record = "";
+ while (reader.getPosition() < lineCount && record != null) {
+ record = readLine();
+ }
+}
+```
+
+In this case, after the above code runs, the current line is 40,322, allowing the `Step` to start again from where it left off. The `ExecutionContext` can also be used for
+statistics that need to be persisted about the run itself. For example, if a flat file
+contains orders for processing that exist across multiple lines, it may be necessary to
+store how many orders have been processed (which is much different from the number of
+lines read), so that an email can be sent at the end of the `Step` with the total number
+of orders processed in the body. The framework handles storing this for the developer, in
+order to correctly scope it with an individual `JobInstance`. It can be very difficult to
+know whether an existing `ExecutionContext` should be used or not. For example, using the
+'EndOfDay' example from above, when the 01-01 run starts again for the second time, the
+framework recognizes that it is the same `JobInstance` and on an individual `Step` basis,
+pulls the `ExecutionContext` out of the database, and hands it (as part of the `StepExecution`) to the `Step` itself. Conversely, for the 01-02 run, the framework
+recognizes that it is a different instance, so an empty context must be handed to the `Step`. There are many of these types of determinations that the framework makes for the
+developer, to ensure the state is given to them at the correct time. It is also important
+to note that exactly one `ExecutionContext` exists per `StepExecution` at any given time.
+Clients of the `ExecutionContext` should be careful, because this creates a shared
+keyspace. As a result, care should be taken when putting values in to ensure no data is
+overwritten. However, the `Step` stores absolutely no data in the context, so there is no
+way to adversely affect the framework.
+
+It is also important to note that there is at least one `ExecutionContext` per `JobExecution` and one for every `StepExecution`. For example, consider the following
+code snippet:
+
+```
+ExecutionContext ecStep = stepExecution.getExecutionContext();
+ExecutionContext ecJob = jobExecution.getExecutionContext();
+//ecStep does not equal ecJob
+```
+
+As noted in the comment, `ecStep` does not equal `ecJob`. They are two different `ExecutionContexts`. The one scoped to the `Step` is saved at every commit point in the `Step`, whereas the one scoped to the `Job` is saved in between every `Step` execution.
+
+### JobRepository
+
+`JobRepository` is the persistence mechanism for all of the Stereotypes mentioned above.
+It provides CRUD operations for `JobLauncher`, `Job`, and `Step` implementations. When a `Job` is first launched, a `JobExecution` is obtained from the repository, and, during
+the course of execution, `StepExecution` and `JobExecution` implementations are persisted
+by passing them to the repository.
+
+The Spring Batch XML namespace provides support for configuring a `JobRepository` instance
+with the `<job-repository>` tag, as shown in the following example:
+
+```
+<job-repository id="jobRepository"/>
+```
+
+When using Java configuration, the `@EnableBatchProcessing` annotation provides a `JobRepository` as one of the components automatically configured out of the box.
+
+### JobLauncher
+
+`JobLauncher` represents a simple interface for launching a `Job` with a given set of `JobParameters`, as shown in the following example:
+
+```
+public interface JobLauncher {
+
+    public JobExecution run(Job job, JobParameters jobParameters)
+                throws JobExecutionAlreadyRunningException, JobRestartException,
+                       JobInstanceAlreadyCompleteException, JobParametersInvalidException;
+}
+```
+
+It is expected that implementations obtain a valid `JobExecution` from the `JobRepository` and execute the `Job`.
+
+### Item Reader
+
+`ItemReader` is an abstraction that represents the retrieval of input for a `Step`, one
+item at a time. When the `ItemReader` has exhausted the items it can provide, it
+indicates this by returning `null`. More details about the `ItemReader` interface and its
+various implementations can be found in [Readers And Writers](readersAndWriters.html#readersAndWriters).
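
The end-of-input contract is easy to see with a toy reader that follows the same convention (plain Java; `ListBackedReader` is an illustrative name, not a Spring Batch class):

```java
import java.util.Iterator;
import java.util.List;

class ListBackedReader {

    // Illustrative reader following the ItemReader contract:
    // one item per call, null once the input is exhausted.
    private final Iterator<String> items;

    ListBackedReader(List<String> items) {
        this.items = items.iterator();
    }

    String read() {
        return items.hasNext() ? items.next() : null; // null signals end of input
    }

    public static void main(String[] args) {
        ListBackedReader reader = new ListBackedReader(List.of("a", "b"));
        System.out.println(reader.read()); // a
        System.out.println(reader.read()); // b
        System.out.println(reader.read()); // null
    }
}
```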
+
+### Item Writer
+
+`ItemWriter` is an abstraction that represents the output of a `Step`, one batch or chunk
+of items at a time. Generally, an `ItemWriter` has no knowledge of the input it should
+receive next and knows only the item that was passed in its current invocation. More
+details about the `ItemWriter` interface and its various implementations can be found in [Readers And Writers](readersAndWriters.html#readersAndWriters).
+
+### Item Processor
+
+`ItemProcessor` is an abstraction that represents the business processing of an item.
+While the `ItemReader` reads one item, and the `ItemWriter` writes them, the `ItemProcessor` provides an access point to transform or apply other business processing.
+If, while processing the item, it is determined that the item is not valid, returning `null` indicates that the item should not be written out. More details about the `ItemProcessor` interface can be found in [Readers And Writers](readersAndWriters.html#readersAndWriters).
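
The null-means-filter convention can be sketched in plain Java (an illustrative processor, not a Spring Batch class; in a real chunk-oriented step the framework performs the null check for you):

```java
import java.util.ArrayList;
import java.util.List;

class FilteringProcessorSketch {

    // Illustrative processor following the ItemProcessor contract:
    // returning null tells the framework to filter the item out.
    static String process(String item) {
        return item.isBlank() ? null : item.toUpperCase();
    }

    public static void main(String[] args) {
        List<String> written = new ArrayList<>();
        for (String item : List.of("header", "   ", "body")) {
            String out = process(item);
            if (out != null) {           // filtered items never reach the ItemWriter
                written.add(out);
            }
        }
        System.out.println(written);     // [HEADER, BODY]
    }
}
```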
+
+### Batch Namespace
+
+Many of the domain concepts listed previously need to be configured in a Spring `ApplicationContext`. While there are implementations of the interfaces above that can be
+used in a standard bean definition, a namespace has been provided for ease of
+configuration, as shown in the following example:
+
+```
+<beans:beans xmlns="http://www.springframework.org/schema/batch"
+    xmlns:beans="http://www.springframework.org/schema/beans"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="
+        http://www.springframework.org/schema/beans
+        https://www.springframework.org/schema/beans/spring-beans.xsd
+        http://www.springframework.org/schema/batch
+        https://www.springframework.org/schema/batch/spring-batch.xsd">
+
+<job id="ioSampleJob">
+    <step id="step1">
+        <tasklet>
+            <chunk reader="itemReader" writer="itemWriter" commit-interval="2"/>
+        </tasklet>
+    </step>
+</job>
+
+</beans:beans>
+```
+
+As long as the batch namespace has been declared, any of its elements can be used. More
+information on configuring a Job can be found in [Configuring and
+Running a Job](job.html#configureJob). More information on configuring a `Step` can be found in [Configuring a Step](step.html#configureStep).
\ No newline at end of file
diff --git a/docs/en/spring-batch/glossary.md b/docs/en/spring-batch/glossary.md
new file mode 100644
index 0000000000000000000000000000000000000000..6f43da1bc4055539a564f5309dd9fc1d8749f638
--- /dev/null
+++ b/docs/en/spring-batch/glossary.md
@@ -0,0 +1,122 @@
+# Glossary
+
+## Appendix A: Glossary
+
+### Spring Batch Glossary
+
+Batch
+
+An accumulation of business transactions over time.
+
+Batch Application Style
+
+Term used to designate batch as an application style in its own right, similar to
+online, Web, or SOA. It has standard elements of input, validation, transformation of
+information to business model, business processing, and output. In addition, it
+requires monitoring at a macro level.
+
+Batch Processing
+
+The handling of a batch of many business transactions that have accumulated over a
+period of time (such as an hour, a day, a week, a month, or a year). It is the
+application of a process or set of processes to many data entities or objects in a
+repetitive and predictable fashion with either no manual element or a separate manual
+element for error processing.
+
+Batch Window
+
+The time frame within which a batch job must complete. This can be constrained by other
+systems coming online, other dependent jobs needing to execute, or other factors
+specific to the batch environment.
+
+Step
+
+The main batch task or unit of work. It initializes the business logic and controls the
+transaction environment, based on the commit interval setting and other factors.
+
+Tasklet
+
+A component created by an application developer to process the business logic for a
+Step.
+
+Batch Job Type
+
+Job types describe the application of jobs for particular types of processing. Common areas
+are interface processing (typically flat files), forms processing (either for online
+PDF generation or print formats), and report processing.
+
+Driving Query
+
+A driving query identifies the set of work for a job to do. The job then breaks that
+work into individual units of work. For instance, a driving query might be to identify
+all financial transactions that have a status of "pending transmission" and send them
+to a partner system. The driving query returns a set of record IDs to process. Each
+record ID then becomes a unit of work. A driving query may involve a join (if the
+criteria for selection falls across two or more tables) or it may work with a single
+table.
+
+Item
+
+An item represents the smallest amount of complete data for processing. In the simplest
+terms, this might be a line in a file, a row in a database table, or a particular
+element in an XML file.
+
+Logical Unit of Work (LUW)
+
+A batch job iterates through a driving query (or other input source, such as a file) to
+perform the set of work that the job must accomplish. Each iteration of work performed
+is a unit of work.
+
+Commit Interval
+
+A set of LUWs processed within a single transaction.
+
+Partitioning
+
+Splitting a job into multiple threads where each thread is responsible for a subset of
+the overall data to be processed. The threads of execution may be within the same JVM
+or they may span JVMs in a clustered environment that supports workload balancing.
+
+Staging Table
+
+A table that holds temporary data while it is being processed.
+
+Restartable
+
+A job that can be executed again and assumes the same identity as when run initially.
+In other words, it has the same job instance ID.
+
+Rerunnable
+
+A job that is restartable and manages its own state in terms of the previous run’s
+record processing. An example of a rerunnable step is one based on a driving query. If
+the driving query can be formed so that it limits the processed rows when the job is
+restarted, then it is rerunnable. This is managed by the application logic. Often, a
+condition is added to the `where` statement to limit the rows returned by the driving
+query with logic resembling "and processedFlag != true".
+
+Repeat
+
+One of the most basic units of batch processing, it defines repeatedly calling a
+portion of code until it is finished and while there is no error. Typically, a batch
+process is repeatable as long as there is input.
+
+Retry
+
+Simplifies the execution of operations with retry semantics, most frequently associated
+with handling transactional output exceptions. Retry is slightly different from repeat:
+rather than continually calling a block of code, retry is stateful and continually
+calls the same block of code with the same input, until it either succeeds or some type
+of retry limit has been exceeded. It is generally useful only when a subsequent
+invocation of the operation might succeed because something in the environment has
+improved.
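
The distinction can be sketched in Java: retry keeps re-invoking the same operation on the same input until it succeeds or an attempt limit is exceeded. The class below is illustrative only; Spring's production-grade support for this lives in the separate Spring Retry project:

```java
import java.util.function.Function;

// A minimal retry loop: call the same operation with the same input until it
// succeeds or the attempt limit is exceeded. Illustrative sketch only.
class RetryTemplateSketch {
    static <I, O> O retry(Function<I, O> operation, I input, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.apply(input); // same input every time: retry is stateful
            } catch (RuntimeException e) {
                last = e; // remember the failure; the attempt count is the state
            }
        }
        throw last; // limit exceeded: surface the final failure
    }
}
```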
+
+Recover
+
+Recover operations handle an exception in such a way that a repeat process is able to
+continue.
+
+Skip
+
+Skip is a recovery strategy often used on file input sources as the strategy for
+ignoring bad input records that failed validation.
\ No newline at end of file
diff --git a/docs/en/spring-batch/job.md b/docs/en/spring-batch/job.md
new file mode 100644
index 0000000000000000000000000000000000000000..ebc75c5b814d7c61153d09f216a370c387888796
--- /dev/null
+++ b/docs/en/spring-batch/job.md
@@ -0,0 +1,1357 @@
+# Configuring and Running a Job
+
+## Configuring and Running a Job
+
+In the [domain section](domain.html#domainLanguageOfBatch), the overall
+architecture design was discussed, using the following diagram as a
+guide:
+
+![Figure 2.1: Batch Stereotypes](https://docs.spring.io/spring-batch/docs/current/reference/html/images/spring-batch-reference-model.png)
+
+Figure 1. Batch Stereotypes
+
+While the `Job` object may seem like a simple
+container for steps, there are many configuration options of which a
+developer must be aware. Furthermore, there are many considerations for
+how a `Job` will be run and how its meta-data will be
+stored during that run. This chapter will explain the various configuration
+options and runtime concerns of a `Job`.
+
+### Configuring a Job
+
+There are multiple implementations of the [`Job`](#configureJob) interface. However,
+builders abstract away the difference in configuration.
+
+```
+@Bean
+public Job footballJob() {
+ return this.jobBuilderFactory.get("footballJob")
+ .start(playerLoad())
+ .next(gameLoad())
+ .next(playerSummarization())
+ .build();
+}
+```
+
+A `Job` (and typically any `Step` within it) requires a `JobRepository`. The
+configuration of the `JobRepository` is handled via the [`BatchConfigurer`](#javaConfig).
+
+The above example illustrates a `Job` that consists of three `Step` instances. The job-related
+builders can also contain other elements that help with parallelization (`Split`),
+declarative flow control (`Decision`) and externalization of flow definitions (`Flow`).
+
+Whether you use Java or XML, there are multiple implementations of the [`Job`](#configureJob) interface. However, the namespace abstracts away the differences in configuration. It has
+only three required dependencies: a name, `JobRepository`, and a list of `Step` instances.
+
+```
+<job id="footballJob">
+    <step id="playerload"          parent="s1" next="gameLoad"/>
+    <step id="gameLoad"            parent="s2" next="playerSummarization"/>
+    <step id="playerSummarization" parent="s3"/>
+</job>
+```
+
+The examples here use a parent bean definition to create the steps.
+See the section on [step configuration](step.html#configureStep) for more options declaring specific step details inline. The XML namespace
+defaults to referencing a repository with an id of 'jobRepository', which
+is a sensible default. However, this can be overridden explicitly:
+
+```
+<job id="footballJob" job-repository="specialRepository">
+    <step id="playerload"          parent="s1" next="gameLoad"/>
+    <step id="gameLoad"            parent="s2" next="playerSummarization"/>
+    <step id="playerSummarization" parent="s3"/>
+</job>
+```
+
+In addition to steps, a job configuration can contain other elements that help with
+parallelization (`<split>`), declarative flow control (`<decision>`) and externalization
+of flow definitions (`<flow/>`).
+
+#### Restartability
+
+One key issue when executing a batch job concerns the behavior of a `Job` when it is
+restarted. The launching of a `Job` is considered to be a 'restart' if a `JobExecution` already exists for the particular `JobInstance`. Ideally, all jobs should be able to start
+up where they left off, but there are scenarios where this is not possible. *It is
+entirely up to the developer to ensure that a new `JobInstance` is created in this
+scenario.* However, Spring Batch does provide some help. If a `Job` should never be
+restarted, but should always be run as part of a new `JobInstance`, then the
+restartable property may be set to 'false'.
+
+The following example shows how to set the `restartable` field to `false` in XML:
+
+XML Configuration
+
+```
+<job id="footballJob" restartable="false">
+    ...
+</job>
+```
+
+The following example shows how to set the `restartable` field to `false` in Java:
+
+Java Configuration
+
+```
+@Bean
+public Job footballJob() {
+ return this.jobBuilderFactory.get("footballJob")
+ .preventRestart()
+ ...
+ .build();
+}
+```
+
+To phrase it another way, setting restartable to false means “this `Job` does not support being started again”. Restarting a `Job` that is not
+restartable causes a `JobRestartException` to
+be thrown.
+
+```
+Job job = new SimpleJob();
+job.setRestartable(false);
+
+JobParameters jobParameters = new JobParameters();
+
+JobExecution firstExecution = jobRepository.createJobExecution(job, jobParameters);
+jobRepository.saveOrUpdate(firstExecution);
+
+try {
+ jobRepository.createJobExecution(job, jobParameters);
+ fail();
+}
+catch (JobRestartException e) {
+ // expected
+}
+```
+
+This snippet of JUnit code shows how attempting to create a `JobExecution` the first time for a non-restartable
+job will cause no issues. However, the second
+attempt will throw a `JobRestartException`.
+
+#### Intercepting Job Execution
+
+During the course of the execution of a
+Job, it may be useful to be notified of various
+events in its lifecycle so that custom code may be executed. The `SimpleJob` allows for this by calling a `JobListener` at the appropriate time:
+
+```
+public interface JobExecutionListener {
+
+ void beforeJob(JobExecution jobExecution);
+
+ void afterJob(JobExecution jobExecution);
+
+}
+```
+
+`JobListeners` can be added to a `SimpleJob` by setting listeners on the job.
+
+The following example shows how to add a listener element to an XML job definition:
+
+XML Configuration
+
+```
+<job id="footballJob">
+    <step id="playerload"          parent="s1" next="gameLoad"/>
+    <step id="gameLoad"            parent="s2" next="playerSummarization"/>
+    <step id="playerSummarization" parent="s3"/>
+    <listeners>
+        <listener ref="sampleListener"/>
+    </listeners>
+</job>
+```
+
+The following example shows how to add a listener method to a Java job definition:
+
+Java Configuration
+
+```
+@Bean
+public Job footballJob() {
+ return this.jobBuilderFactory.get("footballJob")
+ .listener(sampleListener())
+ ...
+ .build();
+}
+```
+
+It should be noted that the `afterJob` method is called regardless of the success or
+failure of the `Job`. If success or failure needs to be determined, it can be obtained
+from the `JobExecution`, as follows:
+
+```
+public void afterJob(JobExecution jobExecution) {
+    if (jobExecution.getStatus() == BatchStatus.COMPLETED) {
+ //job success
+ }
+ else if (jobExecution.getStatus() == BatchStatus.FAILED) {
+ //job failure
+ }
+}
+```
+
+The annotations corresponding to this interface are:
+
+* `@BeforeJob`
+
+* `@AfterJob`
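
How annotated methods map onto the listener callbacks can be sketched with local stand-in annotations and a small reflective dispatcher. The `@BeforeJob`/`@AfterJob` annotations below are declared locally for the sketch (the real ones live in `org.springframework.batch.core.annotation`), and `AuditListener`/`ListenerDispatcher` are illustrative names; the framework's actual adaptation is more involved:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Local stand-ins for Spring Batch's @BeforeJob/@AfterJob annotations.
@Retention(RetentionPolicy.RUNTIME) @interface BeforeJob {}
@Retention(RetentionPolicy.RUNTIME) @interface AfterJob {}

// A POJO listener: no interface to implement, just annotated methods.
class AuditListener {
    final StringBuilder log = new StringBuilder();

    @BeforeJob
    public void logStart() { log.append("before;"); }

    @AfterJob
    public void logEnd() { log.append("after;"); }
}

class ListenerDispatcher {
    // Invoke every public method on the listener carrying the given annotation,
    // roughly what the framework does when adapting annotated listeners.
    static void fire(Object listener, Class<? extends Annotation> annotation) throws Exception {
        for (Method m : listener.getClass().getMethods()) {
            if (m.isAnnotationPresent(annotation)) {
                m.invoke(listener);
            }
        }
    }
}
```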
+
+#### Inheriting from a Parent Job
+
+If a group of Jobs share similar, but not
+identical, configurations, then it may be helpful to define a "parent" `Job` from which the concrete
+Jobs may inherit properties. Similar to class
+inheritance in Java, the "child" `Job` will combine
+its elements and attributes with the parent’s.
+
+In the following example, "baseJob" is an abstract `Job` definition that defines only a list of
+listeners. The `Job` "job1" is a concrete
+definition that inherits the list of listeners from "baseJob" and merges
+it with its own list of listeners to produce a `Job` with two listeners and one `Step`, "step1".
+
+```
+<job id="baseJob" abstract="true">
+    <listeners>
+        <listener ref="listenerOne"/>
+    </listeners>
+</job>
+
+<job id="job1" parent="baseJob">
+    <step id="step1" parent="standaloneStep"/>
+
+    <listeners merge="true">
+        <listener ref="listenerTwo"/>
+    </listeners>
+</job>
+```
+
+Please see the section on [Inheriting from a Parent Step](step.html#inheritingFromParentStep) for more detailed information.
+
+#### JobParametersValidator
+
+A job declared in the XML namespace or using any subclass of `AbstractJob` can optionally declare a validator for the job parameters at
+runtime. This is useful when, for instance, you need to assert that a job
+is started with all its mandatory parameters. There is a `DefaultJobParametersValidator` that can be used to constrain combinations
+of simple mandatory and optional parameters, and for more complex
+constraints you can implement the interface yourself.
+
+The configuration of a validator is supported through the XML namespace through a child
+element of the job, as shown in the following example:
+
+```
+<job id="job1" parent="baseJob3">
+    <step id="step8" parent="s8"/>
+    <validator ref="parametersValidator"/>
+</job>
+```
+
+The validator can be specified as a reference (as shown earlier) or as a nested bean
+definition in the beans namespace.
+
+The configuration of a validator is supported through the java builders, as shown in the
+following example:
+
+```
+@Bean
+public Job job1() {
+ return this.jobBuilderFactory.get("job1")
+ .validator(parametersValidator())
+ ...
+ .build();
+}
+```
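
The mandatory-parameter check can be sketched in plain Java. Here, ordinary collections and `IllegalArgumentException` stand in for the real `JobParameters` and `JobParametersInvalidException` types, and `MandatoryKeysValidator` is an illustrative name in the spirit of `DefaultJobParametersValidator`:

```java
import java.util.Map;
import java.util.Set;

// A sketch of mandatory-parameter validation; plain collections and an
// unchecked exception stand in for the Spring Batch types.
class MandatoryKeysValidator {
    private final Set<String> requiredKeys;

    MandatoryKeysValidator(Set<String> requiredKeys) {
        this.requiredKeys = requiredKeys;
    }

    // Reject a launch whose parameters are missing any mandatory key.
    void validate(Map<String, String> parameters) {
        for (String key : requiredKeys) {
            if (!parameters.containsKey(key)) {
                throw new IllegalArgumentException("Missing mandatory job parameter: " + key);
            }
        }
    }
}
```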
+
+### Java Config
+
+Spring 3 brought the ability to configure applications via Java instead of XML. As of
+Spring Batch 2.2.0, batch jobs can be configured using the same Java configuration.
+There are two components for the Java based configuration: the `@EnableBatchProcessing` annotation and two builders.
+
+`@EnableBatchProcessing` works similarly to the other `@Enable*` annotations in the
+Spring family. In this case, `@EnableBatchProcessing` provides a base configuration for
+building batch jobs. Within this base configuration, an instance of `StepScope` is
+created in addition to a number of beans made available to be autowired:
+
+* `JobRepository`: bean name "jobRepository"
+
+* `JobLauncher`: bean name "jobLauncher"
+
+* `JobRegistry`: bean name "jobRegistry"
+
+* `PlatformTransactionManager`: bean name "transactionManager"
+
+* `JobBuilderFactory`: bean name "jobBuilders"
+
+* `StepBuilderFactory`: bean name "stepBuilders"
+
+The core interface for this configuration is the `BatchConfigurer`. The default
+implementation provides the beans mentioned above and requires a `DataSource` as a bean
+within the context to be provided. This data source is used by the `JobRepository`.
+You can customize any of these beans
+by creating a custom implementation of the `BatchConfigurer` interface.
+Typically, extending the `DefaultBatchConfigurer` (which is provided if a `BatchConfigurer` is not found) and overriding the required getter is sufficient.
+However, implementing your own from scratch may be required. The following
+example shows how to provide a custom transaction manager:
+
+```
+@Bean
+public BatchConfigurer batchConfigurer(DataSource dataSource) {
+ return new DefaultBatchConfigurer(dataSource) {
+ @Override
+ public PlatformTransactionManager getTransactionManager() {
+ return new MyTransactionManager();
+ }
+ };
+}
+```
+
+| |Only one configuration class needs to have the `@EnableBatchProcessing` annotation. Once you have a class annotated with it, you will have all of the above available.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+With the base configuration in place, a user can use the provided builder factories to
+configure a job. The following example shows a two step job configured with the `JobBuilderFactory` and the `StepBuilderFactory`:
+
+```
+@Configuration
+@EnableBatchProcessing
+@Import(DataSourceConfiguration.class)
+public class AppConfig {
+
+ @Autowired
+ private JobBuilderFactory jobs;
+
+ @Autowired
+ private StepBuilderFactory steps;
+
+ @Bean
+ public Job job(@Qualifier("step1") Step step1, @Qualifier("step2") Step step2) {
+ return jobs.get("myJob").start(step1).next(step2).build();
+ }
+
+ @Bean
+ protected Step step1(ItemReader reader,
+ ItemProcessor processor,
+ ItemWriter writer) {
+ return steps.get("step1")
+                .chunk(10)
+ .reader(reader)
+ .processor(processor)
+ .writer(writer)
+ .build();
+ }
+
+ @Bean
+ protected Step step2(Tasklet tasklet) {
+ return steps.get("step2")
+ .tasklet(tasklet)
+ .build();
+ }
+}
+```
+
+### Configuring a JobRepository
+
+When using `@EnableBatchProcessing`, a `JobRepository` is provided out of the box for you.
+This section addresses configuring your own.
+
+As described earlier, the [`JobRepository`](#configureJob) is used for basic CRUD operations of the various persisted
+domain objects within Spring Batch, such as `JobExecution` and `StepExecution`. It is required by many of the major
+framework features, such as the `JobLauncher`, `Job`, and `Step`.
+
+The batch namespace abstracts away many of the implementation details of the `JobRepository` implementations and their collaborators. However, there are still a few
+configuration options available, as shown in the following example:
+
+XML Configuration
+
+```
+<job-repository id="jobRepository"
+    data-source="dataSource"
+    transaction-manager="transactionManager"
+    isolation-level-for-create="SERIALIZABLE"
+    table-prefix="BATCH_"
+    max-varchar-length="1000"/>
+```
+
+None of the configuration options listed above are required except the `id`. If they are
+not set, the defaults shown above will be used. They are shown above for awareness
+purposes. The `max-varchar-length` defaults to 2500, which is the length of the long `VARCHAR` columns in the [sample schema
+scripts](schema-appendix.html#metaDataSchemaOverview).
+
+When using Java configuration, a `JobRepository` is provided for you. A JDBC based one is
+provided out of the box if a `DataSource` is provided, the `Map` based one if not. However,
+you can customize the configuration of the `JobRepository` through an implementation of the `BatchConfigurer` interface.
+
+Java Configuration
+
+```
+...
+// This would reside in your BatchConfigurer implementation
+@Override
+protected JobRepository createJobRepository() throws Exception {
+ JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
+ factory.setDataSource(dataSource);
+ factory.setTransactionManager(transactionManager);
+ factory.setIsolationLevelForCreate("ISOLATION_SERIALIZABLE");
+ factory.setTablePrefix("BATCH_");
+ factory.setMaxVarCharLength(1000);
+ return factory.getObject();
+}
+...
+```
+
+None of the configuration options listed above are required except
+the dataSource and transactionManager. If they are not set, the defaults shown above
+will be used. They are shown above for awareness purposes. The
+max varchar length defaults to 2500, which is the
+length of the long `VARCHAR` columns in the [sample schema scripts](schema-appendix.html#metaDataSchemaOverview).
+
+#### Transaction Configuration for the JobRepository
+
+If the namespace or the provided `FactoryBean` is used, transactional advice is
+automatically created around the repository. This is to ensure that the batch meta-data,
+including state that is necessary for restarts after a failure, is persisted correctly.
+The behavior of the framework is not well defined if the repository methods are not
+transactional. The isolation level in the `create*` method attributes is specified
+separately to ensure that, when jobs are launched, if two processes try to launch
+the same job at the same time, only one succeeds. The default isolation level for that
+method is `SERIALIZABLE`, which is quite aggressive. `READ_COMMITTED` would work just as
+well. `READ_UNCOMMITTED` would be fine if two processes are not likely to collide in this
+way. However, since a call to the `create*` method is quite short, it is unlikely that `SERIALIZABLE` causes problems, as long as the database platform supports it. However, this
+can be overridden.
+
+The following example shows how to override the isolation level in XML:
+
+XML Configuration
+
+```
+<job-repository id="jobRepository"
+    isolation-level-for-create="REPEATABLE_READ" />
+```
+
+The following example shows how to override the isolation level in Java:
+
+Java Configuration
+
+```
+// This would reside in your BatchConfigurer implementation
+@Override
+protected JobRepository createJobRepository() throws Exception {
+ JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
+ factory.setDataSource(dataSource);
+ factory.setTransactionManager(transactionManager);
+ factory.setIsolationLevelForCreate("ISOLATION_REPEATABLE_READ");
+ return factory.getObject();
+}
+```
+
+If the namespace or factory beans are not used, then it is also essential to configure the
+transactional behavior of the repository using AOP.
+
+The following example shows how to configure the transactional behavior of the repository
+in XML:
+
+XML Configuration
+
+```
+<aop:config>
+    <aop:advisor
+        pointcut="execution(* org.springframework.batch.core..*Repository+.*(..))"
+        advice-ref="txAdvice" />
+</aop:config>
+
+<tx:advice id="txAdvice" transaction-manager="transactionManager">
+    <tx:attributes>
+        <tx:method name="*" />
+    </tx:attributes>
+</tx:advice>
+```
+
+The preceding fragment can be used nearly as is, with almost no changes. Remember also to
+include the appropriate namespace declarations and to make sure spring-tx and spring-aop
+(or the whole of Spring) are on the classpath.
+
+The following example shows how to configure the transactional behavior of the repository
+in Java:
+
+Java Configuration
+
+```
+@Bean
+public TransactionProxyFactoryBean baseProxy() {
+ TransactionProxyFactoryBean transactionProxyFactoryBean = new TransactionProxyFactoryBean();
+ Properties transactionAttributes = new Properties();
+ transactionAttributes.setProperty("*", "PROPAGATION_REQUIRED");
+ transactionProxyFactoryBean.setTransactionAttributes(transactionAttributes);
+ transactionProxyFactoryBean.setTarget(jobRepository());
+ transactionProxyFactoryBean.setTransactionManager(transactionManager());
+ return transactionProxyFactoryBean;
+}
+```
+
+#### Changing the Table Prefix
+
+Another modifiable property of the `JobRepository` is the table prefix of the meta-data
+tables. By default they are all prefaced with `BATCH_`. `BATCH_JOB_EXECUTION` and `BATCH_STEP_EXECUTION` are two examples. However, there are potential reasons to modify this
+prefix. If the schema name needs to be prepended to the table names, or if more than one
+set of meta data tables is needed within the same schema, then the table prefix needs to
+be changed:
+
+The following example shows how to change the table prefix in XML:
+
+XML Configuration
+
+```
+<job-repository id="jobRepository"
+    table-prefix="SYSTEM.TEST_" />
+```
+
+The following example shows how to change the table prefix in Java:
+
+Java Configuration
+
+```
+// This would reside in your BatchConfigurer implementation
+@Override
+protected JobRepository createJobRepository() throws Exception {
+ JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
+ factory.setDataSource(dataSource);
+ factory.setTransactionManager(transactionManager);
+ factory.setTablePrefix("SYSTEM.TEST_");
+ return factory.getObject();
+}
+```
+
+Given the preceding changes, every query to the meta-data tables is prefixed with `SYSTEM.TEST_`. `BATCH_JOB_EXECUTION` is referred to as `SYSTEM.TEST_JOB_EXECUTION`.
+
+| |Only the table prefix is configurable. The table and column names are not.|
+|---|--------------------------------------------------------------------------|
+
+#### In-Memory Repository
+
+There are scenarios in which you may not want to persist your domain objects to the
+database. One reason may be speed; storing domain objects at each commit point takes extra
+time. Another reason may be that you just don’t need to persist status for a particular
+job. For this reason, Spring Batch provides an in-memory `Map` version of the job
+repository.
+
+The following example shows the inclusion of `MapJobRepositoryFactoryBean` in XML:
+
+XML Configuration
+
+```
+<bean id="jobRepository"
+    class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
+    <property name="transactionManager" ref="transactionManager"/>
+</bean>
+```
+
+The following example shows the inclusion of `MapJobRepositoryFactoryBean` in Java:
+
+Java Configuration
+
+```
+// This would reside in your BatchConfigurer implementation
+@Override
+protected JobRepository createJobRepository() throws Exception {
+ MapJobRepositoryFactoryBean factory = new MapJobRepositoryFactoryBean();
+ factory.setTransactionManager(transactionManager);
+ return factory.getObject();
+}
+```
+
+Note that the in-memory repository is volatile and so does not allow restart between JVM
+instances. It also cannot guarantee that two job instances with the same parameters are
+not launched simultaneously, and is not suitable for use in a multi-threaded Job, or a locally
+partitioned `Step`. So use the database version of the repository wherever you need those
+features.
+
+However, it does require a transaction manager to be defined because there are rollback
+semantics within the repository, and because the business logic might still be
+transactional (such as RDBMS access). For testing purposes many people find the `ResourcelessTransactionManager` useful.
+
+| |The `MapJobRepositoryFactoryBean` and related classes have been deprecated in v4 and are scheduled for removal in v5. If you want to use an in-memory job repository, you can use an embedded database like H2, Apache Derby or HSQLDB. There are several ways to create an embedded database and use it in your Spring Batch application. One way to do that is by using the APIs from [Spring JDBC](https://docs.spring.io/spring-framework/docs/current/reference/html/data-access.html#jdbc-embedded-database-support): ``` @Bean public DataSource dataSource() { return new EmbeddedDatabaseBuilder() .setType(EmbeddedDatabaseType.H2) .addScript("/org/springframework/batch/core/schema-drop-h2.sql") .addScript("/org/springframework/batch/core/schema-h2.sql") .build(); } ``` Once you have defined your embedded datasource as a bean in your application context, it should be picked up automatically if you use `@EnableBatchProcessing`. Otherwise you can configure it manually using the JDBC based `JobRepositoryFactoryBean` as shown in the [Configuring a JobRepository section](#configuringJobRepository).|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+#### Non-standard Database Types in a Repository
+
+If you are using a database platform that is not in the list of supported platforms, you
+may be able to use one of the supported types, if the SQL variant is close enough. To do
+this, you can use the raw `JobRepositoryFactoryBean` instead of the namespace shortcut and
+use it to set the database type to the closest match.
+
+The following example shows how to use `JobRepositoryFactoryBean` to set the database type
+to the closest match in XML:
+
+XML Configuration
+
+```
+<bean id="jobRepository" class="org...JobRepositoryFactoryBean">
+    <property name="databaseType" value="db2"/>
+    <property name="dataSource" ref="dataSource"/>
+</bean>
+```
+
+The following example shows how to use `JobRepositoryFactoryBean` to set the database type
+to the closest match in Java:
+
+Java Configuration
+
+```
+// This would reside in your BatchConfigurer implementation
+@Override
+protected JobRepository createJobRepository() throws Exception {
+ JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
+ factory.setDataSource(dataSource);
+ factory.setDatabaseType("db2");
+ factory.setTransactionManager(transactionManager);
+ return factory.getObject();
+}
+```
+
+(The `JobRepositoryFactoryBean` tries to
+auto-detect the database type from the `DataSource` if it is not specified.) The major differences between platforms are
+mainly accounted for by the strategy for incrementing primary keys, so
+often it might be necessary to override the `incrementerFactory` as well (using one of the standard
+implementations from the Spring Framework).
+
+If even that doesn’t work, or you are not using an RDBMS, then the
+only option may be to implement the various `Dao` interfaces that the `SimpleJobRepository` depends
+on and wire one up manually in the normal Spring way.
+
+### Configuring a JobLauncher
+
+When using `@EnableBatchProcessing`, a `JobLauncher` is provided out of the box for you.
+This section addresses configuring your own.
+
+The most basic implementation of the `JobLauncher` interface is the `SimpleJobLauncher`.
+Its only required dependency is a `JobRepository`, in order to obtain an execution.
+
+The following example shows a `SimpleJobLauncher` in XML:
+
+XML Configuration
+
+```
+<bean id="jobLauncher"
+      class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
+    <property name="jobRepository" ref="jobRepository" />
+</bean>
+```
+
+The following example shows a `SimpleJobLauncher` in Java:
+
+Java Configuration
+
+```
+...
+// This would reside in your BatchConfigurer implementation
+@Override
+protected JobLauncher createJobLauncher() throws Exception {
+ SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
+ jobLauncher.setJobRepository(jobRepository);
+ jobLauncher.afterPropertiesSet();
+ return jobLauncher;
+}
+...
+```
+
+Once a [JobExecution](domain.html#domainLanguageOfBatch) is obtained, it is passed to the
+execute method of `Job`, ultimately returning the `JobExecution` to the caller, as shown
+in the following image:
+
+![Job Launcher Sequence](https://docs.spring.io/spring-batch/docs/current/reference/html/images/job-launcher-sequence-sync.png)
+
+Figure 2. Job Launcher Sequence
+
+The sequence is straightforward and works well when launched from a scheduler. However,
+issues arise when trying to launch from an HTTP request. In this scenario, the launching
+needs to be done asynchronously so that the `SimpleJobLauncher` returns immediately to its
+caller. This is because it is not good practice to keep an HTTP request open for the
+amount of time needed by long running processes such as batch. The following image shows
+an example sequence:
+
+![Async Job Launcher Sequence](https://docs.spring.io/spring-batch/docs/current/reference/html/images/job-launcher-sequence-async.png)
+
+Figure 3. Asynchronous Job Launcher Sequence
+
+The `SimpleJobLauncher` can be configured to allow for this scenario by configuring a `TaskExecutor`.
+
+The following XML example shows a `SimpleJobLauncher` configured to return immediately:
+
+XML Configuration
+
+```
+<bean id="jobLauncher"
+      class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
+    <property name="jobRepository" ref="jobRepository" />
+    <property name="taskExecutor">
+        <bean class="org.springframework.core.task.SimpleAsyncTaskExecutor" />
+    </property>
+</bean>
+```
+
+The following Java example shows a `SimpleJobLauncher` configured to return immediately:
+
+Java Configuration
+
+```
+@Bean
+public JobLauncher jobLauncher() {
+ SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
+ jobLauncher.setJobRepository(jobRepository());
+ jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
+ jobLauncher.afterPropertiesSet();
+ return jobLauncher;
+}
+```
+
+Any implementation of the Spring `TaskExecutor` interface can be used to control how jobs are asynchronously
+executed.
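
The essence of the asynchronous hand-off can be sketched with a plain `ExecutorService`: submit the work, get a handle back immediately, and let the job finish in the background. `AsyncLaunchSketch` and its `Status` enum are illustrative names, not the Spring Batch API (the real launcher returns a `JobExecution` in the `STARTED` state rather than a `Future`):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// Sketch of the asynchronous-launch idea behind SimpleJobLauncher with an
// async TaskExecutor; illustrative only.
class AsyncLaunchSketch {
    enum Status { STARTING, COMPLETED }

    static Future<Status> launch(ExecutorService executor, Runnable job) {
        // submit() returns at once; the job runs on the executor's thread,
        // so an HTTP caller is not blocked for the duration of the batch.
        return executor.submit(() -> {
            job.run();
            return Status.COMPLETED;
        });
    }
}
```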
+
+### Running a Job
+
+At a minimum, launching a batch job requires two things: the `Job` to be launched and a `JobLauncher`. Both can be contained within the same
+context or different contexts. For example, if launching a job from the
+command line, a new JVM will be instantiated for each Job, and thus every
+job will have its own `JobLauncher`. However, if
+running from within a web container within the scope of an `HttpRequest`, there will usually be one `JobLauncher`, configured for asynchronous job
+launching, that multiple requests will invoke to launch their jobs.
+
+#### Running Jobs from the Command Line
+
+For users that want to run their jobs from an enterprise
+scheduler, the command line is the primary interface. This is because
+most schedulers (with the exception of Quartz unless using
+`NativeJob`) work directly with operating system
+processes, primarily kicked off with shell scripts. There are many ways
+to launch a Java process besides a shell script, such as Perl, Ruby, or
+even 'build tools' such as Ant or Maven. However, because most people
+are familiar with shell scripts, this example will focus on them.
+
+##### The CommandLineJobRunner
+
+Because the script launching the job must kick off a Java
+Virtual Machine, there needs to be a class with a main method to act
+as the primary entry point. Spring Batch provides an implementation
+that serves just this purpose: `CommandLineJobRunner`. It’s important to note
+that this is just one way to bootstrap your application, but there are
+many ways to launch a Java process, and this class should in no way be
+viewed as definitive. The `CommandLineJobRunner` performs four tasks:
+
+* Load the appropriate `ApplicationContext`
+
+* Parse command line arguments into `JobParameters`
+
+* Locate the appropriate job based on arguments
+
+* Use the `JobLauncher` provided in the
+ application context to launch the job.
+
+All of these tasks are accomplished using only the arguments
+passed in. The following are required arguments:
+
+|Argument|Description|
+|--------|-----------|
+|jobPath |The location of the XML file that will be used to create an `ApplicationContext`. This file should contain everything needed to run the complete `Job`.|
+|jobName |The name of the job to be run.|
+
+These arguments must be passed in with the path first and the name second. All arguments
+after these are considered to be job parameters, are turned into a JobParameters object,
+and must be in the format of 'name=value'.
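Anything after the two required arguments is treated as a job parameter in 'name=value' form. The following self-contained sketch (a hypothetical helper for illustration, not the actual `CommandLineJobRunner` code) shows how such trailing arguments can be turned into a simple name-to-value map before being wrapped in a `JobParameters` object:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JobParameterParser {

    // Parses trailing command-line arguments of the form name=value into a map,
    // the way a runner might before building a JobParameters object.
    static Map<String, String> parse(String... args) {
        Map<String, String> params = new LinkedHashMap<>();
        for (String arg : args) {
            int eq = arg.indexOf('=');
            if (eq < 0) {
                throw new IllegalArgumentException("Expected name=value, got: " + arg);
            }
            params.put(arg.substring(0, eq), arg.substring(eq + 1));
        }
        return params;
    }

    public static void main(String[] args) {
        // The parenthesized suffix, e.g. (date), stays part of the name here;
        // the real converter uses it to pick the parameter type.
        System.out.println(parse("schedule.date(date)=2007/05/05", "vendor.id=123"));
    }
}
```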
+
+The following example shows a date passed as a job parameter to a job defined in XML:
+
+```
+bash$ java CommandLineJobRunner endOfDayJob.xml endOfDay schedule.date(date)=2007/05/05
+```
+
+| |By default, the `CommandLineJobRunner` uses a `DefaultJobParametersConverter` that implicitly converts key/value pairs to identifying job parameters. However, it is possible to explicitly specify which job parameters are identifying and which are not by prefixing them with `+` or `-` respectively. In the following example, `schedule.date` is an identifying job parameter, while `vendor.id` is not: `+schedule.date(date)=2007/05/05 -vendor.id=123`. This behaviour can be overridden by using a custom `JobParametersConverter`.|
+|---|---|
+
+In most cases, you would want to use a manifest to declare your main class in a jar, but,
+for simplicity, the class was used directly. This example is using the same 'EndOfDay'
+example from the [domainLanguageOfBatch](domain.html#domainLanguageOfBatch). The first
+argument is 'endOfDayJob.xml', which is the Spring `ApplicationContext` containing the `Job`. The second argument, 'endOfDay', represents the job name. The final argument,
+'schedule.date(date)=2007/05/05', is converted into a `JobParameters` object.
+
+The following example shows a sample configuration for `endOfDay` in XML:
+
+```
+<job id="endOfDay">
+    <step id="step1" parent="simpleStep" />
+</job>
+```
+
+In most cases, you would want to use a manifest to declare your main class in a jar, but,
+for simplicity, the class was used directly. This example is using the same 'EndOfDay'
+example from the [domainLanguageOfBatch](domain.html#domainLanguageOfBatch). The first
+argument is 'io.spring.EndOfDayJobConfiguration', which is the fully qualified class name
+of the configuration class containing the `Job`. The second argument, 'endOfDay', represents
+the job name. The final argument, 'schedule.date(date)=2007/05/05', is converted into a `JobParameters` object.
+
+The following example shows a sample configuration for `endOfDay` in Java:
+
+```
+@Configuration
+@EnableBatchProcessing
+public class EndOfDayJobConfiguration {
+
+ @Autowired
+ private JobBuilderFactory jobBuilderFactory;
+
+ @Autowired
+ private StepBuilderFactory stepBuilderFactory;
+
+ @Bean
+ public Job endOfDay() {
+ return this.jobBuilderFactory.get("endOfDay")
+ .start(step1())
+ .build();
+ }
+
+ @Bean
+ public Step step1() {
+ return this.stepBuilderFactory.get("step1")
+ .tasklet((contribution, chunkContext) -> null)
+ .build();
+ }
+}
+```
+
+The preceding example is overly simplistic, since there are many more requirements to
+run a batch job in Spring Batch in general, but it serves to show the two main
+requirements of the `CommandLineJobRunner`: `Job` and `JobLauncher`.
+
+##### ExitCodes
+
+When launching a batch job from the command-line, an enterprise
+scheduler is often used. Most schedulers are fairly dumb and work only
+at the process level. This means that they only know about some
+operating system process such as a shell script that they’re invoking.
+In this scenario, the only way to communicate back to the scheduler
+about the success or failure of a job is through return codes. A
+return code is a number that is returned to a scheduler by the process
+that indicates the result of the run. In the simplest case: 0 is
+success and 1 is failure. However, there may be more complex
+scenarios: If job A returns 4 kick off job B, and if it returns 5 kick
+off job C. This type of behavior is configured at the scheduler level,
+but it is important that a processing framework such as Spring Batch
+provide a way to return a numeric representation of the 'Exit Code'
+for a particular batch job. In Spring Batch this is encapsulated
+within an `ExitStatus`, which is covered in more
+detail in Chapter 5. For the purposes of discussing exit codes, the
+only important thing to know is that an `ExitStatus` has an exit code property that is
+set by the framework (or the developer) and is returned as part of the `JobExecution` returned from the `JobLauncher`. The `CommandLineJobRunner` converts this string value
+to a number using the `ExitCodeMapper` interface:
+
+```
+public interface ExitCodeMapper {
+
+ public int intValue(String exitCode);
+
+}
+```
+
+The essential contract of an `ExitCodeMapper` is that, given a string exit
+code, a number representation will be returned. The default implementation used by the job runner is the `SimpleJvmExitCodeMapper`, which returns 0 for completion, 1 for generic errors, and 2 for any job
+runner errors such as not being able to find a `Job` in the provided context. If anything more
+complex than the 3 values above is needed, then a custom
+implementation of the `ExitCodeMapper` interface
+must be supplied. Because the `CommandLineJobRunner` is the class that creates
+an `ApplicationContext`, and thus cannot be
+'wired together', any values that need to be overwritten must be
+autowired. This means that if an implementation of `ExitCodeMapper` is found within the `BeanFactory`,
+it will be injected into the runner after the context is created. All
+that needs to be done to provide your own `ExitCodeMapper` is to declare the implementation
+as a root level bean and ensure that it is part of the `ApplicationContext` that is loaded by the
+runner.
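As a sketch of that contract, the following self-contained example re-declares the `ExitCodeMapper` interface and implements the hypothetical scheduler scenario described above, in which exit codes 4 and 5 trigger follow-up jobs. The extended exit-code strings are invented for illustration; only `COMPLETED` and `FAILED` are standard `ExitStatus` codes:

```java
public class CustomExitCodeMapperDemo {

    // The contract from the text: map a string exit code to a numeric one.
    interface ExitCodeMapper {
        int intValue(String exitCode);
    }

    // A custom mapping for a scheduler that dispatches follow-up jobs on codes 4 and 5.
    static final ExitCodeMapper SCHEDULER_MAPPER = exitCode -> {
        switch (exitCode) {
            case "COMPLETED":           return 0;
            case "FAILED":              return 1;
            case "COMPLETED.RUN_JOB_B": return 4;  // scheduler kicks off job B
            case "COMPLETED.RUN_JOB_C": return 5;  // scheduler kicks off job C
            default:                    return 1;  // treat anything unknown as failure
        }
    };

    public static void main(String[] args) {
        System.out.println(SCHEDULER_MAPPER.intValue("COMPLETED.RUN_JOB_B")); // 4
    }
}
```

To plug such a mapping into the real runner, the implementation would be declared as a root-level bean in the context loaded by `CommandLineJobRunner`, as described above.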
+
+#### Running Jobs from within a Web Container
+
+Historically, offline processing such as batch jobs has been
+launched from the command-line, as described above. However, there are
+many cases where launching from an `HttpRequest` is
+a better option. Many such use cases include reporting, ad-hoc job
+running, and web application support. Because a batch job is, by definition,
+long running, the most important concern is ensuring that the
+job is launched asynchronously:
+
+![Async Job Launcher Sequence from web container](https://docs.spring.io/spring-batch/docs/current/reference/html/images/launch-from-request.png)
+
+Figure 4. Asynchronous Job Launcher Sequence From Web Container
+
+The controller in this case is a Spring MVC controller. More
+information on Spring MVC can be found in the Spring Framework Reference Documentation.
+The controller launches a `Job` using a `JobLauncher` that has been configured to launch [asynchronously](#runningJobsFromWebContainer), which
+immediately returns a `JobExecution`. The `Job` will likely still be running; however, this
+nonblocking behaviour allows the controller to return immediately, which
+is required when handling an `HttpRequest`. An
+example follows:
+
+```
+@Controller
+public class JobLauncherController {
+
+ @Autowired
+ JobLauncher jobLauncher;
+
+ @Autowired
+ Job job;
+
+ @RequestMapping("/jobLauncher.html")
+ public void handle() throws Exception {
+ jobLauncher.run(job, new JobParameters());
+ }
+}
+```
+
+### Advanced Meta-Data Usage
+
+So far, both the `JobLauncher` and `JobRepository` interfaces have been
+discussed. Together, they represent simple launching of a job, and basic
+CRUD operations of batch domain objects:
+
+![Job Repository](https://docs.spring.io/spring-batch/docs/current/reference/html/images/job-repository.png)
+
+Figure 5. Job Repository
+
+A `JobLauncher` uses the `JobRepository` to create new `JobExecution` objects and run them. `Job` and `Step` implementations
+later use the same `JobRepository` for basic updates
+of the same executions during the running of a `Job`.
+The basic operations suffice for simple scenarios, but in a large batch
+environment with hundreds of batch jobs and complex scheduling
+requirements, more advanced access of the meta data is required:
+
+![Job Repository Advanced](https://docs.spring.io/spring-batch/docs/current/reference/html/images/job-repository-advanced.png)
+
+Figure 6. Advanced Job Repository Access
+
+The `JobExplorer` and `JobOperator` interfaces, which are discussed
+below, add additional functionality for querying and controlling the meta
+data.
+
+#### Querying the Repository
+
+The most basic need before any advanced features is the ability to
+query the repository for existing executions. This functionality is
+provided by the `JobExplorer` interface:
+
+```
+public interface JobExplorer {
+
+    List<JobInstance> getJobInstances(String jobName, int start, int count);
+
+    JobExecution getJobExecution(Long executionId);
+
+    StepExecution getStepExecution(Long jobExecutionId, Long stepExecutionId);
+
+    JobInstance getJobInstance(Long instanceId);
+
+    List<JobExecution> getJobExecutions(JobInstance jobInstance);
+
+    Set<JobExecution> findRunningJobExecutions(String jobName);
+}
+```
+
+As is evident from the method signatures above, `JobExplorer` is a read-only version of
+the `JobRepository`, and, like the `JobRepository`, it can be easily configured by using a
+factory bean:
+
+The following example shows how to configure a `JobExplorer` in XML:
+
+XML Configuration
+
+```
+<bean id="jobExplorer" class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean"
+      p:dataSource-ref="dataSource" />
+```
+
+The following example shows how to configure a `JobExplorer` in Java:
+
+Java Configuration
+
+```
+...
+// This would reside in your BatchConfigurer implementation
+@Override
+public JobExplorer getJobExplorer() throws Exception {
+ JobExplorerFactoryBean factoryBean = new JobExplorerFactoryBean();
+ factoryBean.setDataSource(this.dataSource);
+ return factoryBean.getObject();
+}
+...
+```
+
+[Earlier in this chapter](#repositoryTablePrefix), we noted that the table prefix
+of the `JobRepository` can be modified to allow for different versions or schemas. Because
+the `JobExplorer` works with the same tables, it too needs the ability to set a prefix.
+
+The following example shows how to set the table prefix for a `JobExplorer` in XML:
+
+XML Configuration
+
+```
+<bean id="jobExplorer" class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean"
+      p:dataSource-ref="dataSource" p:tablePrefix="SYSTEM." />
+```
+
+The following example shows how to set the table prefix for a `JobExplorer` in Java:
+
+Java Configuration
+
+```
+...
+// This would reside in your BatchConfigurer implementation
+@Override
+public JobExplorer getJobExplorer() throws Exception {
+ JobExplorerFactoryBean factoryBean = new JobExplorerFactoryBean();
+ factoryBean.setDataSource(this.dataSource);
+ factoryBean.setTablePrefix("SYSTEM.");
+ return factoryBean.getObject();
+}
+...
+```
+
+#### JobRegistry
+
+A `JobRegistry` (and its parent interface `JobLocator`) is not mandatory, but it can be
+useful if you want to keep track of which jobs are available in the context. It is also
+useful for collecting jobs centrally in an application context when they have been created
+elsewhere (for example, in child contexts). Custom `JobRegistry` implementations can also
+be used to manipulate the names and other properties of the jobs that are registered.
+There is only one implementation provided by the framework and this is based on a simple
+map from job name to job instance.
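A minimal sketch of such a map-based registry is shown below, using only the standard library. The real `MapJobRegistry` works with the framework's own `Job` type and richer error handling; here `Job` is reduced to a name-only stand-in for illustration:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MapRegistryDemo {

    // Name-only stand-in for a Job definition; the real interface is richer.
    static class Job {
        final String name;
        Job(String name) { this.name = name; }
    }

    // A minimal registry in the spirit of the framework's map-based implementation:
    // a concurrent map from job name to job instance, rejecting duplicate names.
    static class MapJobRegistry {
        private final Map<String, Job> jobs = new ConcurrentHashMap<>();

        void register(Job job) {
            if (jobs.putIfAbsent(job.name, job) != null) {
                throw new IllegalStateException("A job named '" + job.name + "' is already registered");
            }
        }

        Job getJob(String name) { return jobs.get(name); }

        Set<String> getJobNames() { return jobs.keySet(); }
    }

    public static void main(String[] args) {
        MapJobRegistry registry = new MapJobRegistry();
        registry.register(new Job("endOfDay"));
        System.out.println(registry.getJobNames()); // [endOfDay]
    }
}
```

The duplicate-name check illustrates why job names registered through a shared registry must be globally unique, a point that matters again for the `AutomaticJobRegistrar` below.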
+
+The following example shows how to include a `JobRegistry` for a job defined in XML:
+
+```
+<bean id="jobRegistry" class="org.springframework.batch.core.configuration.support.MapJobRegistry" />
+```
+
+The following example shows how to include a `JobRegistry` for a job defined in Java:
+
+When using `@EnableBatchProcessing`, a `JobRegistry` is provided out of the box for you.
+If you want to configure your own:
+
+```
+...
+// This is already provided via the @EnableBatchProcessing but can be customized via
+// overriding the getter in the SimpleBatchConfiguration
+@Override
+@Bean
+public JobRegistry jobRegistry() throws Exception {
+ return new MapJobRegistry();
+}
+...
+```
+
+There are two ways to populate a `JobRegistry` automatically: using
+a bean post processor and using a registrar lifecycle component. These
+two mechanisms are described in the following sections.
+
+##### JobRegistryBeanPostProcessor
+
+This is a bean post-processor that can register all jobs as they are created.
+
+The following example shows how to include the `JobRegistryBeanPostProcessor` for a job
+defined in XML:
+
+XML Configuration
+
+```
+<bean id="jobRegistryBeanPostProcessor" class="org.springframework.batch.core.configuration.support.JobRegistryBeanPostProcessor">
+    <property name="jobRegistry" ref="jobRegistry" />
+</bean>
+```
+
+The following example shows how to include the `JobRegistryBeanPostProcessor` for a job
+defined in Java:
+
+Java Configuration
+
+```
+@Bean
+public JobRegistryBeanPostProcessor jobRegistryBeanPostProcessor() {
+ JobRegistryBeanPostProcessor postProcessor = new JobRegistryBeanPostProcessor();
+ postProcessor.setJobRegistry(jobRegistry());
+ return postProcessor;
+}
+```
+
+Although it is not strictly necessary, the post-processor in the
+example has been given an id so that it can be included in child
+contexts (e.g. as a parent bean definition) and cause all jobs created
+there to also be registered automatically.
+
+##### `AutomaticJobRegistrar`
+
+This is a lifecycle component that creates child contexts and registers jobs from those
+contexts as they are created. One advantage of doing this is that, while the job names in
+the child contexts still have to be globally unique in the registry, their dependencies
+can have "natural" names. So for example, you can create a set of XML configuration files
+each having only one Job, but all having different definitions of an `ItemReader` with the
+same bean name, such as "reader". If all those files were imported into the same context,
+the reader definitions would clash and override one another, but with the automatic
+registrar this is avoided. This makes it easier to integrate jobs contributed from
+separate modules of an application.
+
+The following example shows how to include the `AutomaticJobRegistrar` for a job defined
+in XML:
+
+XML Configuration
+
+```
+<bean class="org.springframework.batch.core.configuration.support.AutomaticJobRegistrar">
+    <property name="applicationContextFactories">
+        <bean class="org.springframework.batch.core.configuration.support.ClasspathXmlApplicationContextsFactoryBean">
+            <property name="resources" value="classpath*:/config/job*.xml" />
+        </bean>
+    </property>
+    <property name="jobLoader">
+        <bean class="org.springframework.batch.core.configuration.support.DefaultJobLoader">
+            <property name="jobRegistry" ref="jobRegistry" />
+        </bean>
+    </property>
+</bean>
+```
+
+The following example shows how to include the `AutomaticJobRegistrar` for a job defined
+in Java:
+
+Java Configuration
+
+```
+@Bean
+public AutomaticJobRegistrar registrar() {
+
+ AutomaticJobRegistrar registrar = new AutomaticJobRegistrar();
+ registrar.setJobLoader(jobLoader());
+ registrar.setApplicationContextFactories(applicationContextFactories());
+ registrar.afterPropertiesSet();
+ return registrar;
+
+}
+```
+
+The registrar has two mandatory properties: an array of `ApplicationContextFactory` (here created from a
+convenient factory bean) and a `JobLoader`. The `JobLoader` is responsible for managing the lifecycle of the child contexts and
+registering jobs in the `JobRegistry`.
+
+The `ApplicationContextFactory` is
+responsible for creating the child context. The most common usage
+is, as above, a `ClassPathXmlApplicationContextFactory`. One of
+the features of this factory is that, by default, it copies some of the
+configuration down from the parent context to the child. So, for
+instance, you don’t have to re-define the `PropertyPlaceholderConfigurer` or AOP
+configuration in the child, if it should be the same as the
+parent.
+
+The `AutomaticJobRegistrar` can be used in
+conjunction with a `JobRegistryBeanPostProcessor` if desired (as long as the `DefaultJobLoader` is
+used as well). For instance, this might be desirable if there are jobs
+defined in the main parent context as well as in the child
+locations.
+
+#### JobOperator
+
+As previously discussed, the `JobRepository` provides CRUD operations on the meta-data, and the `JobExplorer` provides read-only operations on the
+meta-data. However, those operations are most useful when used together
+to perform common monitoring tasks such as stopping, restarting, or
+summarizing a `Job`, as is commonly done by batch operators. Spring Batch
+provides these types of operations via the `JobOperator` interface:
+
+```
+public interface JobOperator {
+
+    List<Long> getExecutions(long instanceId) throws NoSuchJobInstanceException;
+
+    List<Long> getJobInstances(String jobName, int start, int count)
+          throws NoSuchJobException;
+
+    Set<Long> getRunningExecutions(String jobName) throws NoSuchJobException;
+
+    String getParameters(long executionId) throws NoSuchJobExecutionException;
+
+    Long start(String jobName, String parameters)
+          throws NoSuchJobException, JobInstanceAlreadyExistsException;
+
+    Long restart(long executionId)
+          throws JobInstanceAlreadyCompleteException, NoSuchJobExecutionException,
+                 NoSuchJobException, JobRestartException;
+
+    Long startNextInstance(String jobName)
+          throws NoSuchJobException, JobParametersNotFoundException, JobRestartException,
+                 JobExecutionAlreadyRunningException, JobInstanceAlreadyCompleteException;
+
+    boolean stop(long executionId)
+          throws NoSuchJobExecutionException, JobExecutionNotRunningException;
+
+    String getSummary(long executionId) throws NoSuchJobExecutionException;
+
+    Map<Long, String> getStepExecutionSummaries(long executionId)
+          throws NoSuchJobExecutionException;
+
+    Set<String> getJobNames();
+
+}
+```
+
+The above operations represent methods from many different interfaces, such as `JobLauncher`, `JobRepository`, `JobExplorer`, and `JobRegistry`. For this reason, the
+provided implementation of `JobOperator`, `SimpleJobOperator`, has many dependencies.
+
+The following example shows a typical bean definition for `SimpleJobOperator` in XML:
+
+```
+<bean id="jobOperator" class="org.springframework.batch.core.launch.support.SimpleJobOperator">
+    <property name="jobExplorer">
+        <bean class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean">
+            <property name="dataSource" ref="dataSource" />
+        </bean>
+    </property>
+    <property name="jobRepository" ref="jobRepository" />
+    <property name="jobRegistry" ref="jobRegistry" />
+    <property name="jobLauncher" ref="jobLauncher" />
+</bean>
+```
+
+The following example shows a typical bean definition for `SimpleJobOperator` in Java:
+
+```
+ /**
+  * All injected dependencies for this bean are provided by the @EnableBatchProcessing
+  * infrastructure out of the box.
+  */
+ @Bean
+ public SimpleJobOperator jobOperator(JobExplorer jobExplorer,
+                                      JobRepository jobRepository,
+                                      JobRegistry jobRegistry,
+                                      JobLauncher jobLauncher) {
+
+        SimpleJobOperator jobOperator = new SimpleJobOperator();
+
+        jobOperator.setJobExplorer(jobExplorer);
+        jobOperator.setJobRepository(jobRepository);
+        jobOperator.setJobRegistry(jobRegistry);
+        jobOperator.setJobLauncher(jobLauncher);
+
+        return jobOperator;
+ }
+```
+
+| |If you set the table prefix on the job repository, don’t forget to set it on the job explorer as well.|
+|---|------------------------------------------------------------------------------------------------------|
+
+#### JobParametersIncrementer
+
+Most of the methods on `JobOperator` are
+self-explanatory, and more detailed explanations can be found in the [javadoc of the interface](https://docs.spring.io/spring-batch/docs/current/api/org/springframework/batch/core/launch/JobOperator.html). However, the `startNextInstance` method is worth noting. This
+method will always start a new instance of a `Job`.
+This can be extremely useful if there are serious issues in a `JobExecution` and the `Job`
+needs to be started over again from the beginning. Unlike `JobLauncher`, though, which requires a new `JobParameters` object that will trigger a new `JobInstance` if the parameters are different from
+any previous set of parameters, the `startNextInstance` method will use the `JobParametersIncrementer` tied to the `Job` to force the `Job` to a
+new instance:
+
+```
+public interface JobParametersIncrementer {
+
+ JobParameters getNext(JobParameters parameters);
+
+}
+```
+
+The contract of `JobParametersIncrementer` is
+that, given a [JobParameters](#jobParameters) object, it will return the 'next' `JobParameters`
+object by incrementing any necessary values it may contain. This
+strategy is useful because the framework has no way of knowing what
+changes to the `JobParameters` make it the 'next'
+instance. For example, if the only value in `JobParameters` is a date, and the next instance
+should be created, should that value be incremented by one day? Or one
+week (if the job is weekly, for instance)? The same can be said for any
+numerical values that help to identify the `Job`,
+as shown below:
+
+```
+public class SampleIncrementer implements JobParametersIncrementer {
+
+ public JobParameters getNext(JobParameters parameters) {
+ if (parameters==null || parameters.isEmpty()) {
+ return new JobParametersBuilder().addLong("run.id", 1L).toJobParameters();
+ }
+ long id = parameters.getLong("run.id",1L) + 1;
+ return new JobParametersBuilder().addLong("run.id", id).toJobParameters();
+ }
+}
+```
+
+In this example, the value with a key of 'run.id' is used to
+discriminate between `JobInstances`. If the `JobParameters` passed in is null, it can be
+assumed that the `Job` has never been run before
+and thus its initial state can be returned. However, if not, the old
+value is obtained, incremented by one, and returned.
+
+For jobs defined in XML, an incrementer can be associated with `Job` through the
+'incrementer' attribute in the namespace, as follows:
+
+```
+<job id="footballJob" incrementer="sampleIncrementer">
+    ...
+</job>
+```
+
+For jobs defined in Java, an incrementer can be associated with a `Job` through the `incrementer` method provided in the builders, as follows:
+
+```
+@Bean
+public Job footballJob() {
+ return this.jobBuilderFactory.get("footballJob")
+ .incrementer(sampleIncrementer())
+ ...
+ .build();
+}
+```
+
+#### Stopping a Job
+
+One of the most common use cases of `JobOperator` is gracefully stopping a
+`Job`:
+
+```
+Set<Long> executions = jobOperator.getRunningExecutions("sampleJob");
+jobOperator.stop(executions.iterator().next());
+```
+
+The shutdown is not immediate, since there is no way to force
+immediate shutdown, especially if the execution is currently in
+developer code that the framework has no control over, such as a
+business service. However, as soon as control is returned back to the
+framework, it will set the status of the current `StepExecution` to `BatchStatus.STOPPED`, save it, then do the same
+for the `JobExecution` before finishing.
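This cooperative stop can be sketched without any framework classes: a stop request only raises a flag, and the processing loop honors it at the next check point, never in the middle of business code. Everything below is an illustrative stand-in, not Spring Batch code:

```java
public class CooperativeStopDemo {

    // Sketch of why JobOperator.stop() is not immediate: the framework can only
    // raise a flag; the running step observes it between items.
    static class StepExecution {
        private volatile boolean stopRequested;
        private String status = "STARTED";
        private int itemsProcessed;

        // What a stop request amounts to from the outside.
        void requestStop() { stopRequested = true; }

        // The processing loop: the flag is checked only between items.
        void process(int totalItems) {
            for (int i = 0; i < totalItems; i++) {
                if (stopRequested) {
                    status = "STOPPED";   // persisted by the framework once control returns
                    return;
                }
                itemsProcessed++;         // stand-in for business code the framework cannot interrupt
            }
            status = "COMPLETED";
        }

        String getStatus() { return status; }
        int getItemsProcessed() { return itemsProcessed; }
    }

    public static void main(String[] args) {
        StepExecution normal = new StepExecution();
        normal.process(3);
        System.out.println(normal.getStatus());   // COMPLETED

        StepExecution stopped = new StepExecution();
        stopped.requestStop();                    // flag raised before the next item check
        stopped.process(3);
        System.out.println(stopped.getStatus());  // STOPPED
    }
}
```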
+
+#### Aborting a Job
+
+A job execution which is `FAILED` can be
+restarted (if the `Job` is restartable). A job execution whose status is `ABANDONED` will not be restarted by the framework.
+The `ABANDONED` status is also used in step
+executions to mark them as skippable in a restarted job execution: if a
+job is executing and encounters a step that has been marked `ABANDONED` in the previous failed job execution, it
+will move on to the next step (as determined by the job flow definition
+and the step execution exit status).
+
+If the process died (`"kill -9"` or server
+failure), the job is, of course, not running, but the `JobRepository` has
+no way of knowing, because no one told it before the process died. You
+have to tell it manually that you know that the execution either failed
+or should be considered aborted (change its status to `FAILED` or `ABANDONED`). This is
+a business decision, and there is no way to automate it. Only change the
+status to `FAILED` if it is not restartable, or if
+you know the restart data is valid. There is a utility in Spring Batch
+Admin, `JobService`, to abort a job execution.
+# JSR-352 Support
+
+## JSR-352 Support
+
+
+As of Spring Batch 3.0, support for JSR-352 has been fully implemented. This section is not a replacement for
+the spec itself and instead intends to explain how the JSR-352 specific concepts apply to Spring Batch.
+Additional information on JSR-352 can be found on the JCP page for the specification.
+
+### General Notes about Spring Batch and JSR-352
+
+Spring Batch and JSR-352 are structurally the same. They both have jobs that are made up of steps. They
+both have readers, processors, writers, and listeners. However, their interactions are subtly different.
+For example, the `org.springframework.batch.core.SkipListener#onSkipInWrite(S item, Throwable t)` within Spring Batch receives two parameters: the item that was skipped and the Exception that caused the
+skip. The JSR-352 version of the same method
+(`javax.batch.api.chunk.listener.SkipWriteListener#onSkipWriteItem(List<Object> items, Exception ex)`)
+also receives two parameters. However, the first one is a `List` of all the items
+within the current chunk, with the second being the `Exception` that caused the skip.
+Because of these differences, it is important to note that there are two paths to execute a job within
+Spring Batch: either a traditional Spring Batch job or a JSR-352 based job. While the use of Spring Batch
+artifacts (readers, writers, etc.) will work within a job configured with JSR-352’s JSL and executed with the `JsrJobOperator`, they will behave according to the rules of JSR-352. It is also
+important to note that batch artifacts that have been developed against the JSR-352 interfaces will not work
+within a traditional Spring Batch job.
+
+### Setup
+
+#### Application Contexts
+
+All JSR-352 based jobs within Spring Batch consist of two application contexts: a parent context that
+contains beans related to the infrastructure of Spring Batch, such as the `JobRepository`, `PlatformTransactionManager`, and others, and a child context that consists of the configuration
+of the job to be run. The parent context is defined via the `jsrBaseContext.xml` provided
+by the framework. This context may be overridden by setting the `JSR-352-BASE-CONTEXT` system
+property.
+
+| |The base context is not processed by the JSR-352 processors for things like property injection so no components requiring that additional processing should be configured there.|
+|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+#### Launching a JSR-352 based job
+
+JSR-352 requires a very simple path to executing a batch job. The following code is all that is needed to
+execute your first batch job:
+
+```
+JobOperator jobOperator = BatchRuntime.getJobOperator();
+jobOperator.start("myJob", new Properties());
+```
+
+While that is convenient for developers, the devil is in the details. Spring Batch bootstraps a bit of
+infrastructure behind the scenes that a developer may want to override. The following is bootstrapped the
+first time `BatchRuntime.getJobOperator()` is called:
+
+| *Bean Name* | *Default Configuration* | *Notes* |
+|------------------------|-----------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| dataSource | Apache DBCP BasicDataSource with configured values. | By default, HSQLDB is bootstrapped. |
+| `transactionManager` | `org.springframework.jdbc.datasource.DataSourceTransactionManager` | References the dataSource bean defined above. |
+|A Datasource initializer| | This is configured to execute the scripts configured via the`batch.drop.script` and `batch.schema.script` properties. By default, the schema scripts for HSQLDB are executed. This behavior can be disabled by setting the`batch.data.source.init` property. |
+| jobRepository | A JDBC based `SimpleJobRepository`. | This `JobRepository` uses the previously mentioned data source and transaction manager. The schema’s table prefix is configurable (defaults to BATCH\_) via the`batch.table.prefix` property. |
+| jobLauncher | `org.springframework.batch.core.launch.support.SimpleJobLauncher` | Used to launch jobs. |
+| batchJobOperator | `org.springframework.batch.core.launch.support.SimpleJobOperator` | The `JsrJobOperator` wraps this to provide most of its functionality. |
+| jobExplorer |`org.springframework.batch.core.explore.support.JobExplorerFactoryBean`| Used to address lookup functionality provided by the `JsrJobOperator`. |
+| jobParametersConverter | `org.springframework.batch.core.jsr.JsrJobParametersConverter` | JSR-352 specific implementation of the `JobParametersConverter`. |
+| jobRegistry | `org.springframework.batch.core.configuration.support.MapJobRegistry` | Used by the `SimpleJobOperator`. |
+| placeholderProperties |`org.springframework.beans.factory.config.PropertyPlaceholderConfigurer`|Loads the properties file `batch-${ENVIRONMENT:hsql}.properties` to configure the properties mentioned above. ENVIRONMENT is a System property (defaults to `hsql`) that can be used to specify any of the databases Spring Batch currently supports.|
+
+| |None of the above beans are optional for executing JSR-352 based jobs. All may be overridden to provide customized functionality as needed.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------------|
+
+### Dependency Injection
+
+JSR-352 is based heavily on the Spring Batch programming model. As such, while not explicitly requiring a
+formal dependency injection implementation, DI of some kind is implied. Spring Batch supports all three
+methods for loading batch artifacts defined by JSR-352:
+
+* Implementation Specific Loader: Spring Batch is built upon Spring and so supports
+ Spring dependency injection within JSR-352 batch jobs.
+
+* Archive Loader: JSR-352 defines the existence of a `batch.xml` file that provides mappings
+  between a logical name and a class name. This file must be found within the `/META-INF/` directory if it is used.
+
+* Thread Context Class Loader: JSR-352 allows configurations to specify batch artifact
+ implementations in their JSL by providing the fully qualified class name inline. Spring
+ Batch supports this as well in JSR-352 configured jobs.
+
+Using Spring dependency injection within a JSR-352 based batch job consists of
+configuring batch artifacts as beans in a Spring application context. Once the beans
+have been defined, a job can refer to them as it would any bean defined within the `batch.xml` file.
+
+The following example shows how to use Spring dependency injection within a JSR-352 based
+batch job in XML:
+
+XML Configuration
+
+```
+<?xml version="1.0" encoding="UTF-8"?>
+<beans xmlns="http://www.springframework.org/schema/beans"
+       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+       xmlns:batch="http://xmlns.jcp.org/xml/ns/javaee"
+       xsi:schemaLocation="http://www.springframework.org/schema/beans
+                           https://www.springframework.org/schema/beans/spring-beans.xsd
+                           http://xmlns.jcp.org/xml/ns/javaee
+                           https://xmlns.jcp.org/xml/ns/javaee/jobXML_1_0.xsd">
+
+    <!-- javax.batch.api.Batchlet implementation -->
+    <bean id="fooBatchlet" class="io.spring.FooBatchlet">
+        <property name="prop" value="bar"/>
+    </bean>
+
+    <!-- Job is defined using the JSL schema provided in JSR-352 -->
+    <batch:job id="fooJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
+        <batch:step id="step1">
+            <batch:batchlet ref="fooBatchlet"/>
+        </batch:step>
+    </batch:job>
+</beans>
+```
+
+The following example shows how to use Spring dependency injection within a JSR-352 based
+batch job in Java:
+
+Java Configuration
+
+```
+@Configuration
+public class BatchConfiguration {
+
+ @Bean
+ public Batchlet fooBatchlet() {
+ FooBatchlet batchlet = new FooBatchlet();
+ batchlet.setProp("bar");
+ return batchlet;
+ }
+}
+
+<?xml version="1.0" encoding="UTF-8"?>
+<job id="fooJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
+    <step id="step1">
+        <batchlet ref="fooBatchlet"/>
+    </step>
+</job>
+```
+
+The assembly of Spring contexts (imports and so on) works with JSR-352 jobs just as it would with any other
+Spring based application. The only difference with a JSR-352 based job is that the entry point for the
+context definition is the job definition found in `/META-INF/batch-jobs/`.
+
+To use the thread context class loader approach, all you need to do is provide the fully qualified class
+name as the `ref`. Note that, when using this approach or the `batch.xml` approach, the referenced
+class requires a no-argument constructor, which is used to create the bean.
+
+```
+<job id="fooJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
+    <step id="step1">
+        <batchlet ref="io.spring.FooBatchlet"/>
+    </step>
+</job>
+```
+
+### Batch Properties
+
+#### Property Support
+
+JSR-352 allows for properties to be defined at the Job, Step and batch artifact level by way of
+configuration in the JSL. Batch properties are configured at each level in the following way:
+
+```
+<job id="job1" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
+    <properties>
+        <property name="propertyName1" value="propertyValue1"/>
+        <property name="propertyName2" value="propertyValue2"/>
+    </properties>
+</job>
+```
+
+`Properties` may be configured on any batch artifact.
+
+#### @BatchProperty annotation
+
+`Properties` are referenced in batch artifacts by annotating class fields with the
+`@BatchProperty` and `@Inject` annotations (both annotations are required by the spec).
+As defined by JSR-352, fields for properties must be String typed. Any type
+conversion is up to the implementing developer to perform.
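+
+Since the spec mandates `String` fields, any non-String configuration value must be
+converted by hand. The following sketch (a hypothetical `Batchlet`, not from the
+reference documentation) shows the pattern:
+
+```
+public class ChunkSizeBatchlet extends AbstractBatchlet {
+
+    @Inject
+    @BatchProperty
+    private String chunkSize; // must be String-typed, per JSR-352
+
+    @Override
+    public String process() throws Exception {
+        // Type conversion is left to the developer
+        int size = Integer.parseInt(chunkSize);
+        // ... use size ...
+        return "COMPLETED";
+    }
+}
+```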
+
+A `javax.batch.api.chunk.ItemReader` artifact could be configured with a
+properties block such as the one described above and accessed as follows:
+
+```
+public class MyItemReader extends AbstractItemReader {
+ @Inject
+ @BatchProperty
+ private String propertyName1;
+
+ ...
+}
+```
+
+The value of the field `propertyName1` will be `propertyValue1`.
+
+#### Property Substitution
+
+Property substitution is provided by way of operators and simple conditional expressions. The general
+usage is `#{operator['key']}`.
+
+Supported operators:
+
+* `jobParameters`: access job parameter values that the job was started/restarted with.
+
+* `jobProperties`: access properties configured at the job level of the JSL.
+
+* `systemProperties`: access named system properties.
+
+* `partitionPlan`: access named property from the partition plan of a partitioned step.
+
+```
+#{jobParameters['unresolving.prop']}?:#{systemProperties['file.separator']}
+```
+
+The left-hand side of the assignment is the expected value, and the right-hand side is the
+default value. In the preceding example, the result resolves to the value of the system
+property `file.separator`, as `#{jobParameters['unresolving.prop']}` is assumed not to be
+resolvable. If neither expression can be resolved, an empty String is returned. Multiple
+conditions can be used, separated by a ';'.
+
+### Processing Models
+
+JSR-352 provides the same two basic processing models that Spring Batch does:
+
+* Item based processing - Using an `javax.batch.api.chunk.ItemReader`, an optional
+  `javax.batch.api.chunk.ItemProcessor`, and an `javax.batch.api.chunk.ItemWriter`.
+
+* Task based processing - Using a `javax.batch.api.Batchlet` implementation. This processing
+  model is the same as the `org.springframework.batch.core.step.tasklet.Tasklet` based processing
+  currently available.
+
+#### Item based processing
+
+Item-based processing in this context means that the chunk size is set by the number of items read by an
+`ItemReader`. To configure a step this way, specify the `item-count` (which defaults to 10) and, optionally,
+configure the `checkpoint-policy` as `item` (this is the default).
+
+```
+...
+<step id="step1">
+    <chunk checkpoint-policy="item" item-count="3">
+        <reader ref="fooReader"/>
+        <processor ref="fooProcessor"/>
+        <writer ref="fooWriter"/>
+    </chunk>
+</step>
+...
+```
+
+If item-based checkpointing is chosen, an additional attribute `time-limit` is supported.
+This sets a time limit for how long the number of items specified has to be processed. If
+the timeout is reached, the chunk will complete with however many items have been read by
+then regardless of what the `item-count` is configured to be.
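+
+For example, the following (illustrative) chunk configuration checkpoints after 100 items
+or 300 seconds, whichever comes first:
+
+```
+<chunk checkpoint-policy="item" item-count="100" time-limit="300">
+    ...
+</chunk>
+```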
+
+#### Custom checkpointing
+
+JSR-352 calls the process around the commit interval within a step "checkpointing".
+Item-based checkpointing is one approach, as mentioned above. However, it is not robust
+enough in many cases. Because of this, the spec allows for the implementation of a custom
+checkpointing algorithm by implementing the `javax.batch.api.chunk.CheckpointAlgorithm`
+interface. This is functionally equivalent to Spring Batch’s custom completion
+policy. To use an implementation of `CheckpointAlgorithm`, configure your step with the
+custom `checkpoint-policy` as shown below, where `fooCheckpointer` refers to an
+implementation of `CheckpointAlgorithm`.
+
+```
+...
+<step id="step1">
+    <chunk checkpoint-policy="custom">
+        <checkpoint-algorithm ref="fooCheckpointer"/>
+        <reader ref="fooReader"/>
+        <processor ref="fooProcessor"/>
+        <writer ref="fooWriter"/>
+    </chunk>
+</step>
+...
+```
+
+### Running a job
+
+The entry point for executing a JSR-352 based job is the `javax.batch.operations.JobOperator`.
+Spring Batch provides its own implementation of this interface
+(`org.springframework.batch.core.jsr.launch.JsrJobOperator`). This implementation is loaded
+via the `javax.batch.runtime.BatchRuntime`. Launching a JSR-352 based batch job is done as follows:
+
+```
+JobOperator jobOperator = BatchRuntime.getJobOperator();
+long jobExecutionId = jobOperator.start("fooJob", new Properties());
+```
+
+The above code does the following:
+
+* Bootstraps a base `ApplicationContext`: In order to provide batch functionality, the
+  framework needs some infrastructure bootstrapped. This occurs once per JVM. The
+  components that are bootstrapped are similar to those provided by `@EnableBatchProcessing`.
+  Specific details can be found in the javadoc for the `JsrJobOperator`.
+
+* Loads an `ApplicationContext` for the job requested: In the example
+  above, the framework looks in `/META-INF/batch-jobs` for a file named `fooJob.xml` and loads a
+  context that is a child of the shared context mentioned previously.
+
+* Launches the job: The job defined within the context is executed asynchronously.
+  The ID of the `JobExecution` is returned.
+
+| |All JSR-352 based batch jobs are executed asynchronously.|
+|---|---------------------------------------------------------|
+
+When `JobOperator#start` is called using `SimpleJobOperator`, Spring Batch determines if
+the call is an initial run or a retry of a previously executed run. Using the JSR-352
+based `JobOperator#start(String jobXMLName, Properties jobParameters)`, the framework
+always creates a new `JobInstance` (JSR-352 job parameters are non-identifying). In order to
+restart a job, a call to `JobOperator#restart(long executionId, Properties restartParameters)` is required.
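+
+For example, restarting a failed execution might look like the following sketch, where
+`failedExecutionId` is the ID returned by the original (failed) run:
+
+```
+JobOperator jobOperator = BatchRuntime.getJobOperator();
+long restartExecutionId = jobOperator.restart(failedExecutionId, new Properties());
+```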
+
+### Contexts
+
+JSR-352 defines two context objects that are used to interact with the meta-data of a job or step from
+within a batch artifact: `javax.batch.runtime.context.JobContext` and `javax.batch.runtime.context.StepContext`.
+Both of these are available in any step level artifact (`Batchlet`, `ItemReader`, and so on), with the
+`JobContext` also being available to job level artifacts (`JobListener`, for example).
+
+To obtain a reference to the `JobContext` or `StepContext` within the current scope, use the `@Inject` annotation:
+
+```
+@Inject
+JobContext jobContext;
+```
+
+| |@Autowired for JSR-352 contexts: Using Spring’s `@Autowired` is not supported for the injection of these contexts.|
+|---|----------------------------------------------------------------------------------------------------------------------|
+
+In Spring Batch, the `JobContext` and `StepContext` wrap their corresponding execution objects
+(`JobExecution` and `StepExecution`, respectively). Data stored through
+`StepContext#setPersistentUserData(Serializable data)` is stored in the
+Spring Batch `StepExecution#executionContext`.
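+
+As an illustration (a hypothetical `Batchlet`, not from the reference documentation),
+persistent user data can be stored as follows:
+
+```
+public class MyBatchlet extends AbstractBatchlet {
+
+    @Inject
+    StepContext stepContext;
+
+    @Override
+    public String process() {
+        // Stored in the underlying StepExecution's ExecutionContext
+        stepContext.setPersistentUserData("last.processed.id=42");
+        return "COMPLETED";
+    }
+}
+```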
+
+### Step Flow
+
+Within a JSR-352 based job, the flow of steps works similarly as it does within Spring Batch.
+However, there are a few subtle differences:
+
+* Decisions are steps - In a regular Spring Batch job, a decision is a state that does not
+  have an independent `StepExecution` or any of the rights and
+  responsibilities that go along with being a full step. However, with JSR-352, a decision
+  is a step just like any other and behaves just like any other step (transactionality,
+  it gets a `StepExecution`, and so on). This means that decisions are treated the
+  same as any other step on restarts as well.
+
+* `next` attribute and step transitions - In a regular job, these are
+ allowed to appear together in the same step. JSR-352 allows them to both be used in the
+ same step with the next attribute taking precedence in evaluation.
+
+* Transition element ordering - In a standard Spring Batch job, transition elements are
+ sorted from most specific to least specific and evaluated in that order. JSR-352 jobs
+ evaluate transition elements in the order they are specified in the XML.
+
+### Scaling a JSR-352 batch job
+
+Traditional Spring Batch jobs have four ways of scaling (the last two capable of being executed across
+multiple JVMs):
+
+* Split - Running multiple steps in parallel.
+
+* Multiple threads - Executing a single step via multiple threads.
+
+* Partitioning - Dividing the data up for parallel processing (manager/worker).
+
+* Remote Chunking - Executing the processor piece of logic remotely.
+
+JSR-352 provides two options for scaling batch jobs. Both options support only a single JVM:
+
+* Split - Same as Spring Batch.
+
+* Partitioning - Conceptually the same as Spring Batch; however, it is implemented slightly differently.
+
+#### Partitioning
+
+Conceptually, partitioning in JSR-352 is the same as it is in Spring Batch. Meta-data is provided
+to each worker to identify the input to be processed, with the workers reporting back to the manager the
+results upon completion. However, there are some important differences:
+
+* Partitioned `Batchlet` - This runs multiple instances of the
+  configured `Batchlet` on multiple threads. Each instance has
+  its own set of properties, as provided by the JSL or the `PartitionPlan`.
+
+* `PartitionPlan` - With Spring Batch’s partitioning, an `ExecutionContext` is provided for each partition. With JSR-352, a
+  single `javax.batch.api.partition.PartitionPlan` is provided with an
+  array of `Properties` providing the meta-data for each partition.
+
+* `PartitionMapper` - JSR-352 provides two ways to generate partition
+  meta-data. One is via the JSL (partition properties). The second is via an implementation
+  of the `javax.batch.api.partition.PartitionMapper` interface.
+  Functionally, this interface is similar to the `org.springframework.batch.core.partition.support.Partitioner`
+  interface provided by Spring Batch, in that it provides a way to programmatically generate
+  meta-data for partitioning.
+
+* `StepExecutions` - In Spring Batch, partitioned steps are run as
+  manager/worker. Within JSR-352, the same configuration occurs. However, the worker steps do
+  not get official `StepExecutions`. Because of that, calls to
+  `JsrJobOperator#getStepExecutions(long jobExecutionId)` return only
+  the `StepExecution` for the manager.
+
+| |The child `StepExecutions` still exist in the job repository and are available through the `JobExplorer`.|
+|---|-------------------------------------------------------------------------------------------------------------|
+
+* Compensating logic - Since Spring Batch implements the manager/worker logic of
+  partitioning using steps, `StepExecutionListeners` can be used to
+  handle compensating logic if something goes wrong. However, since the workers in JSR-352
+  are not official steps, that mechanism is not available. Instead, JSR-352
+  provides a collection of other components to provide compensating logic when
+  errors occur and to dynamically set the exit status. These components include the following:
+
+| *Artifact Interface* | *Description* |
+|----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
+|`javax.batch.api.partition.PartitionCollector`| Provides a way for worker steps to send information back to the manager. There is one instance per worker thread. |
+|`javax.batch.api.partition.PartitionAnalyzer` |Endpoint that receives the information collected by the `PartitionCollector`, as well as the resulting statuses from a completed partition.|
+| `javax.batch.api.partition.PartitionReducer` | Provides the ability to provide compensating logic for a partitioned step. |
+
+### Testing
+
+Since all JSR-352 based jobs are executed asynchronously, it can be difficult to determine when a job has
+completed. To help with testing, Spring Batch provides the `org.springframework.batch.test.JsrTestUtils`
+utility class. It provides the ability to start a job, restart a job, and wait for it to complete. Once the
+job completes, the associated `JobExecution` is returned.
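+
+A test might use it along these lines (a sketch with a JUnit-style assertion; the timeout
+value is illustrative, and the exact signatures can be found in the `JsrTestUtils` javadoc):
+
+```
+// Runs the job and waits up to 10 seconds for it to complete
+JobExecution execution = JsrTestUtils.runJob("fooJob", new Properties(), 10000L);
+assertEquals(BatchStatus.COMPLETED, execution.getBatchStatus());
+```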
\ No newline at end of file
diff --git a/docs/en/spring-batch/monitoring-and-metrics.md b/docs/en/spring-batch/monitoring-and-metrics.md
new file mode 100644
index 0000000000000000000000000000000000000000..86c682c6565b23981d300c690415b2dbb04a03a4
--- /dev/null
+++ b/docs/en/spring-batch/monitoring-and-metrics.md
@@ -0,0 +1,75 @@
+# Monitoring and metrics
+
+## Monitoring and metrics
+
+Since version 4.2, Spring Batch provides support for batch monitoring and metrics
+based on [Micrometer](https://micrometer.io/). This section describes
+which metrics are provided out-of-the-box and how to contribute custom metrics.
+
+### Built-in metrics
+
+Metrics collection does not require any specific configuration. All metrics provided
+by the framework are registered in [Micrometer’s global registry](https://micrometer.io/docs/concepts#_global_registry)
+under the `spring.batch` prefix. The following table explains all the metrics in detail:
+
+| *Metric Name* | *Type* | *Description* | *Tags* |
+|---------------------------|-----------------|---------------------------|---------------------------------|
+| `spring.batch.job` | `TIMER` | Duration of job execution | `name`, `status` |
+| `spring.batch.job.active` |`LONG_TASK_TIMER`| Currently active jobs | `name` |
+| `spring.batch.step` | `TIMER` |Duration of step execution | `name`, `job.name`, `status` |
+| `spring.batch.item.read` | `TIMER` | Duration of item reading |`job.name`, `step.name`, `status`|
+|`spring.batch.item.process`| `TIMER` |Duration of item processing|`job.name`, `step.name`, `status`|
+|`spring.batch.chunk.write` | `TIMER` | Duration of chunk writing |`job.name`, `step.name`, `status`|
+
+| |The `status` tag can be either `SUCCESS` or `FAILURE`.|
+|---|------------------------------------------------------|
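+
+As an illustration, a registered timer can be looked up from the global registry after a
+run (the tag value shown is illustrative):
+
+```
+Timer jobTimer = Metrics.globalRegistry.find("spring.batch.job")
+        .tag("status", "SUCCESS")
+        .timer();
+if (jobTimer != null) {
+    System.out.println(jobTimer.totalTime(TimeUnit.SECONDS));
+}
+```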
+
+### Custom metrics
+
+If you want to use your own metrics in your custom components, we recommend using
+Micrometer APIs directly. The following is an example of how to time a `Tasklet`:
+
+```
+import io.micrometer.core.instrument.Metrics;
+import io.micrometer.core.instrument.Timer;
+
+import org.springframework.batch.core.StepContribution;
+import org.springframework.batch.core.scope.context.ChunkContext;
+import org.springframework.batch.core.step.tasklet.Tasklet;
+import org.springframework.batch.repeat.RepeatStatus;
+
+public class MyTimedTasklet implements Tasklet {
+
+ @Override
+ public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
+ Timer.Sample sample = Timer.start(Metrics.globalRegistry);
+ String status = "success";
+ try {
+ // do some work
+ } catch (Exception e) {
+ // handle exception
+ status = "failure";
+ } finally {
+ sample.stop(Timer.builder("my.tasklet.timer")
+ .description("Duration of MyTimedTasklet")
+ .tag("status", status)
+ .register(Metrics.globalRegistry));
+ }
+ return RepeatStatus.FINISHED;
+ }
+}
+```
+
+### Disabling metrics
+
+Metrics collection is a concern similar to logging. Disabling logs is typically
+done by configuring the logging library, and this is no different for metrics.
+There is no feature in Spring Batch to disable Micrometer’s metrics; this should
+be done on Micrometer’s side. Since Spring Batch stores metrics in the global
+registry of Micrometer with the `spring.batch` prefix, it is possible to configure
+Micrometer to ignore/deny batch metrics with the following snippet:
+
+```
+Metrics.globalRegistry.config().meterFilter(MeterFilter.denyNameStartsWith("spring.batch"));
+```
+
+Please refer to Micrometer’s [reference documentation](http://micrometer.io/docs/concepts#_meter_filters) for more details.
\ No newline at end of file
diff --git a/docs/en/spring-batch/processor.md b/docs/en/spring-batch/processor.md
new file mode 100644
index 0000000000000000000000000000000000000000..a25e49301d882f1261e37ee8964b1b4a7f8168ac
--- /dev/null
+++ b/docs/en/spring-batch/processor.md
@@ -0,0 +1,347 @@
+# Item processing
+
+## Item processing
+
+
+The [ItemReader and ItemWriter interfaces](readersAndWriters.html#readersAndWriters) are both very useful for their specific
+tasks, but what if you want to insert business logic before writing? One option for both
+reading and writing is to use the composite pattern: Create an `ItemWriter` that contains
+another `ItemWriter` or an `ItemReader` that contains another `ItemReader`. The following
+code shows an example:
+
+```
+public class CompositeItemWriter<T> implements ItemWriter<T> {
+
+    ItemWriter<T> itemWriter;
+
+    public CompositeItemWriter(ItemWriter<T> itemWriter) {
+        this.itemWriter = itemWriter;
+    }
+
+    public void write(List<? extends T> items) throws Exception {
+        //Add business logic here
+        itemWriter.write(items);
+    }
+
+    public void setDelegate(ItemWriter<T> itemWriter) {
+        this.itemWriter = itemWriter;
+    }
+}
+```
+
+The preceding class contains another `ItemWriter` to which it delegates after having
+provided some business logic. This pattern could easily be used for an `ItemReader` as
+well, perhaps to obtain more reference data based upon the input that was provided by the
+main `ItemReader`. It is also useful if you need to control the call to `write` yourself.
+However, if you only want to 'transform' the item passed in for writing before it is
+actually written, you need not `write` yourself. You can just modify the item. For this
+scenario, Spring Batch provides the `ItemProcessor` interface, as shown in the following
+interface definition:
+
+```
+public interface ItemProcessor<I, O> {
+
+    O process(I item) throws Exception;
+}
+```
+
+An `ItemProcessor` is simple. Given one object, transform it and return another. The
+provided object may or may not be of the same type. The point is that business logic may
+be applied within the process, and it is completely up to the developer to create that
+logic. An `ItemProcessor` can be wired directly into a step. For example, assume an
+`ItemReader` provides a class of type `Foo` and that it needs to be converted to type `Bar`
+before being written out. The following example shows an `ItemProcessor` that performs
+the conversion:
+
+```
+public class Foo {}
+
+public class Bar {
+ public Bar(Foo foo) {}
+}
+
+public class FooProcessor implements ItemProcessor<Foo, Bar> {
+ public Bar process(Foo foo) throws Exception {
+ //Perform simple transformation, convert a Foo to a Bar
+ return new Bar(foo);
+ }
+}
+
+public class BarWriter implements ItemWriter<Bar> {
+    public void write(List<? extends Bar> bars) throws Exception {
+ //write bars
+ }
+}
+```
+
+In the preceding example, there is a class `Foo`, a class `Bar`, and a class `FooProcessor` that adheres to the
+`ItemProcessor` interface. The transformation is simple, but any type of transformation could be done here.
+The `BarWriter` writes `Bar` objects, throwing an exception if any other type is provided. Similarly, the
+`FooProcessor` throws an exception if anything but a `Foo` is provided. The `FooProcessor` can then be
+injected into a `Step`, as shown in the following example:
+
+XML Configuration
+
+```
+<job id="ioSampleJob">
+    <step name="step1">
+        <tasklet>
+            <chunk reader="fooReader" processor="fooProcessor" writer="barWriter"
+                   commit-interval="2"/>
+        </tasklet>
+    </step>
+</job>
+```
+
+Java Configuration
+
+```
+@Bean
+public Job ioSampleJob() {
+ return this.jobBuilderFactory.get("ioSampleJob")
+ .start(step1())
+ .build();
+}
+
+@Bean
+public Step step1() {
+ return this.stepBuilderFactory.get("step1")
+                .<Foo, Bar>chunk(2)
+ .reader(fooReader())
+ .processor(fooProcessor())
+ .writer(barWriter())
+ .build();
+}
+```
+
+A difference between `ItemProcessor` and `ItemReader` or `ItemWriter` is that an `ItemProcessor` is optional for a `Step`.
+
+### Chaining ItemProcessors
+
+Performing a single transformation is useful in many scenarios, but what if you want to
+'chain' together multiple `ItemProcessor` implementations? This can be accomplished using
+the composite pattern mentioned previously. To update the previous single-transformation
+example, `Foo` is transformed to `Bar`, which is transformed to `Foobar` and written out,
+as shown in the following example:
+
+```
+public class Foo {}
+
+public class Bar {
+ public Bar(Foo foo) {}
+}
+
+public class Foobar {
+ public Foobar(Bar bar) {}
+}
+
+public class FooProcessor implements ItemProcessor<Foo, Bar> {
+ public Bar process(Foo foo) throws Exception {
+ //Perform simple transformation, convert a Foo to a Bar
+ return new Bar(foo);
+ }
+}
+
+public class BarProcessor implements ItemProcessor<Bar, Foobar> {
+ public Foobar process(Bar bar) throws Exception {
+ return new Foobar(bar);
+ }
+}
+
+public class FoobarWriter implements ItemWriter<Foobar> {
+    public void write(List<? extends Foobar> items) throws Exception {
+ //write items
+ }
+}
+```
+
+A `FooProcessor` and a `BarProcessor` can be 'chained' together to give the resultant `Foobar`, as shown in the following example:
+
+```
+CompositeItemProcessor<Foo, Foobar> compositeProcessor =
+                                      new CompositeItemProcessor<Foo, Foobar>();
+List itemProcessors = new ArrayList();
+itemProcessors.add(new FooProcessor());
+itemProcessors.add(new BarProcessor());
+compositeProcessor.setDelegates(itemProcessors);
+
+Just as with the previous example, the composite processor can be configured into the `Step`:
+
+XML Configuration
+
+```
+<job id="ioSampleJob">
+    <step name="step1">
+        <tasklet>
+            <chunk reader="fooReader" processor="compositeItemProcessor" writer="foobarWriter"
+                   commit-interval="2"/>
+        </tasklet>
+    </step>
+</job>
+
+<bean id="compositeItemProcessor"
+      class="org.springframework.batch.item.support.CompositeItemProcessor">
+    <property name="delegates">
+        <list>
+            <bean class="..FooProcessor" />
+            <bean class="..BarProcessor" />
+        </list>
+    </property>
+</bean>
+```
+
+Java Configuration
+
+```
+@Bean
+public Job ioSampleJob() {
+ return this.jobBuilderFactory.get("ioSampleJob")
+ .start(step1())
+ .build();
+}
+
+@Bean
+public Step step1() {
+ return this.stepBuilderFactory.get("step1")
+                .<Foo, Foobar>chunk(2)
+ .reader(fooReader())
+ .processor(compositeProcessor())
+ .writer(foobarWriter())
+ .build();
+}
+
+@Bean
+public CompositeItemProcessor compositeProcessor() {
+    List<ItemProcessor> delegates = new ArrayList<>(2);
+ delegates.add(new FooProcessor());
+ delegates.add(new BarProcessor());
+
+ CompositeItemProcessor processor = new CompositeItemProcessor();
+
+ processor.setDelegates(delegates);
+
+ return processor;
+}
+```
+
+### Filtering Records
+
+One typical use for an item processor is to filter out records before they are passed to
+the `ItemWriter`. Filtering is an action distinct from skipping. Skipping indicates that
+a record is invalid, while filtering simply indicates that a record should not be
+written.
+
+For example, consider a batch job that reads a file containing three different types of
+records: records to insert, records to update, and records to delete. If record deletion
+is not supported by the system, then we would not want to send any "delete" records to
+the `ItemWriter`. But, since these records are not actually bad records, we would want to
+filter them out rather than skip them. As a result, the `ItemWriter` would receive only
+"insert" and "update" records.
+
+To filter a record, you can return `null` from the `ItemProcessor`. The framework detects
+that the result is `null` and avoids adding that item to the list of records delivered to
+the `ItemWriter`. As usual, an exception thrown from the `ItemProcessor` results in a
+skip.
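+
+For the scenario above, a filtering processor might look like the following sketch (the
+`Record` type and its `getOperation` method are hypothetical):
+
+```
+public class DeleteFilteringItemProcessor implements ItemProcessor<Record, Record> {
+
+    @Override
+    public Record process(Record record) throws Exception {
+        // Returning null filters the item: it is never passed to the ItemWriter
+        if ("DELETE".equals(record.getOperation())) {
+            return null;
+        }
+        return record;
+    }
+}
+```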
+
+### Validating Input
+
+In the [ItemReaders and ItemWriters](readersAndWriters.html#readersAndWriters) chapter, multiple approaches to parsing input have been
+discussed. Each major implementation throws an exception if it is not 'well formed'. The
+`FixedLengthTokenizer` throws an exception if a range of data is missing. Similarly,
+attempting to access an index in a `RowMapper` or `FieldSetMapper` that does not exist or
+is in a different format than the one expected causes an exception to be thrown. All of
+these types of exceptions are thrown before `read` returns. However, they do not address
+the issue of whether or not the returned item is valid. For example, if one of the fields
+is an age, it obviously cannot be negative. It may parse correctly, because it exists and
+is a number, but parsing does not cause an exception. Since there are already a plethora of
+validation frameworks, Spring Batch does not attempt to provide yet another. Rather, it
+provides a simple interface, called `Validator`, that can be implemented by any number of
+frameworks, as shown in the following interface definition:
+
+```
+public interface Validator<T> {
+
+ void validate(T value) throws ValidationException;
+
+}
+```
+
+The contract is that the `validate` method throws an exception if the object is invalid
+and returns normally if it is valid. Spring Batch provides an out-of-the-box
+`ValidatingItemProcessor`, as shown in the following bean definition:
+
+XML Configuration
+
+```
+<bean class="org.springframework.batch.item.validator.ValidatingItemProcessor">
+    <property name="validator" ref="validator"/>
+</bean>
+
+<bean id="validator" class="org.springframework.batch.item.validator.SpringValidator">
+    <property name="validator">
+        <bean class="org.springframework.batch.sample.domain.trade.internal.validator.TradeValidator"/>
+    </property>
+</bean>
+```
+
+Java Configuration
+
+```
+@Bean
+public ValidatingItemProcessor itemProcessor() {
+ ValidatingItemProcessor processor = new ValidatingItemProcessor();
+
+ processor.setValidator(validator());
+
+ return processor;
+}
+
+@Bean
+public SpringValidator validator() {
+ SpringValidator validator = new SpringValidator();
+
+ validator.setValidator(new TradeValidator());
+
+ return validator;
+}
+```
+
+You can also use the `BeanValidatingItemProcessor` to validate items annotated with
+the Bean Validation API (JSR-303) annotations. For example, given the following type `Person`:
+
+```
+class Person {
+
+ @NotEmpty
+ private String name;
+
+ public Person(String name) {
+ this.name = name;
+ }
+
+ public String getName() {
+ return name;
+ }
+
+ public void setName(String name) {
+ this.name = name;
+ }
+
+}
+```
+
+you can validate items by declaring a `BeanValidatingItemProcessor` bean in your
+application context and registering it as a processor in your chunk-oriented step:
+
+```
+@Bean
+public BeanValidatingItemProcessor<Person> beanValidatingItemProcessor() throws Exception {
+    BeanValidatingItemProcessor<Person> beanValidatingItemProcessor = new BeanValidatingItemProcessor<>();
+ beanValidatingItemProcessor.setFilter(true);
+
+ return beanValidatingItemProcessor;
+}
+```
+
+### Fault Tolerance
+
+When a chunk is rolled back, items that have been cached during reading may be
+reprocessed. If a step is configured to be fault tolerant (typically by using skip or
+retry processing), any `ItemProcessor` used should be implemented in a way that is
+idempotent. Typically, that consists of performing no changes on the input item of the
+`ItemProcessor` and updating only the instance that is the result.
\ No newline at end of file
diff --git a/docs/en/spring-batch/readersAndWriters.md b/docs/en/spring-batch/readersAndWriters.md
new file mode 100644
index 0000000000000000000000000000000000000000..9d7cea907527e6ba3b545ee92acd63df1e2088aa
--- /dev/null
+++ b/docs/en/spring-batch/readersAndWriters.md
@@ -0,0 +1,2760 @@
+# ItemReaders and ItemWriters
+
+## ItemReaders and ItemWriters
+
+
+All batch processing can be described in its most simple form as reading in large amounts
+of data, performing some type of calculation or transformation, and writing the result
+out. Spring Batch provides three key interfaces to help perform bulk reading and writing:
+`ItemReader`, `ItemProcessor`, and `ItemWriter`.
+
+### `ItemReader`
+
+Although a simple concept, an `ItemReader` is the means for providing data from many
+different types of input. The most general examples include:
+
+* Flat File: Flat-file item readers read lines of data from a flat file that typically
+ describes records with fields of data defined by fixed positions in the file or delimited
+ by some special character (such as a comma).
+
+* XML: XML `ItemReaders` process XML independently of technologies used for parsing,
+ mapping and validating objects. Input data allows for the validation of an XML file
+ against an XSD schema.
+
+* Database: A database resource is accessed to return resultsets which can be mapped to
+  objects for processing. The default SQL `ItemReader` implementations invoke a `RowMapper`
+  to return objects, keep track of the current row if restart is required, store basic
+  statistics, and provide some transaction enhancements that are explained later.
+
+There are many more possibilities, but we focus on the basic ones for this chapter. A
+complete list of all available `ItemReader` implementations can be found in
+[Appendix A](appendix.html#listOfReadersAndWriters).
+
+`ItemReader` is a basic interface for generic
+input operations, as shown in the following interface definition:
+
+```
+public interface ItemReader<T> {
+
+ T read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException;
+
+}
+```
+
+The `read` method defines the most essential contract of the `ItemReader`. Calling it
+returns one item or `null` if no more items are left. An item might represent a line in a
+file, a row in a database, or an element in an XML file. It is generally expected that
+these are mapped to a usable domain object (such as `Trade`, `Foo`, or others), but there
+is no requirement in the contract to do so.
+
+It is expected that implementations of the `ItemReader` interface are forward only.
+However, if the underlying resource is transactional (such as a JMS queue), then calling
+`read` may return the same logical item on subsequent calls in a rollback scenario. It is
+also worth noting that a lack of items to process by an `ItemReader` does not cause an
+exception to be thrown. For example, a database `ItemReader` that is configured with a
+query that returns 0 results returns `null` on the first invocation of `read`.
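+
+The `read` contract can be exercised with a simple loop that drains a reader until the
+`null` end-of-data marker is returned (`reader` here is any `ItemReader<String>`):
+
+```
+List<String> allItems = new ArrayList<>();
+String item;
+while ((item = reader.read()) != null) {
+    allItems.add(item);
+}
+```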
+
+### `ItemWriter`
+
+`ItemWriter` is similar in functionality to an `ItemReader` but with inverse operations.
+Resources still need to be located, opened, and closed, but they differ in that an
+`ItemWriter` writes out, rather than reading in. In the case of databases or queues,
+these operations may be inserts, updates, or sends. The format of the serialization of
+the output is specific to each batch job.
+
+As with `ItemReader`, `ItemWriter` is a fairly generic interface, as shown in the following interface definition:
+
+```
+public interface ItemWriter<T> {
+
+    void write(List<? extends T> items) throws Exception;
+
+}
+```
+
+As with `read` on `ItemReader`, `write` provides the basic contract of `ItemWriter`. It
+attempts to write out the list of items passed in as long as it is open. Because it is
+generally expected that items are 'batched' together into a chunk and then output, the
+interface accepts a list of items, rather than an item by itself. After writing out the
+list, any flushing that may be necessary can be performed before returning from the write
+method. For example, if writing to a Hibernate DAO, multiple calls to write can be made,
+one for each item. The writer can then call `flush` on the hibernate session before
+returning.
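+
+As a minimal sketch, a writer with nothing to buffer simply iterates the chunk (a
+buffering writer would flush once at the end, rather than per item):
+
+```
+public class ConsoleItemWriter implements ItemWriter<String> {
+
+    @Override
+    public void write(List<? extends String> items) throws Exception {
+        for (String item : items) {
+            System.out.println(item);
+        }
+        // A buffering writer would flush here, once per chunk
+    }
+}
+```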
+
+### `ItemStream`
+
+Both `ItemReaders` and `ItemWriters` serve their individual purposes well, but there is a
+common concern among both of them that necessitates another interface. In general, as
+part of the scope of a batch job, readers and writers need to be opened, closed, and
+require a mechanism for persisting state. The `ItemStream` interface serves that purpose,
+as shown in the following example:
+
+```
+public interface ItemStream {
+
+ void open(ExecutionContext executionContext) throws ItemStreamException;
+
+ void update(ExecutionContext executionContext) throws ItemStreamException;
+
+ void close() throws ItemStreamException;
+}
+```
+
+Before describing each method, we should mention the `ExecutionContext`. Clients of an`ItemReader` that also implement `ItemStream` should call `open` before any calls to`read`, in order to open any resources such as files or to obtain connections. A similar
+restriction applies to an `ItemWriter` that implements `ItemStream`. As mentioned in
+Chapter 2, if expected data is found in the `ExecutionContext`, it may be used to start
+the `ItemReader` or `ItemWriter` at a location other than its initial state. Conversely, `close` is called to ensure that any resources allocated during `open` are released safely. `update` is called primarily to ensure that any state currently being held is loaded into
+the provided `ExecutionContext`. This method is called before committing, to ensure that
+the current state is persisted in the database before commit.
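
The open/update life cycle can be sketched with a plain `Map` standing in for the `ExecutionContext`. This is an illustration of the idea, not Spring Batch code; `RestartableListReader` is a hypothetical reader that resumes from persisted state:

```
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the ItemStream state-saving idea: a plain Map
// stands in for the ExecutionContext. open() restores the position and
// update() records it before each commit, so a restarted execution can
// resume where the previous one left off.
class RestartableListReader {

    private static final String KEY = "reader.index";

    private final List<String> items;
    private int index;

    RestartableListReader(List<String> items) {
        this.items = items;
    }

    public void open(Map<String, Object> executionContext) {
        index = (int) executionContext.getOrDefault(KEY, 0);
    }

    public void update(Map<String, Object> executionContext) {
        executionContext.put(KEY, index);
    }

    public String read() {
        return index < items.size() ? items.get(index++) : null;
    }
}
```

A second reader opened with the same context picks up after the last item recorded by `update`, which is exactly the restart behavior a `Step` relies on.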
+
+In the special case where the client of an `ItemStream` is a `Step` (from the Spring
+Batch Core), an `ExecutionContext` is created for each `StepExecution` to allow users to
+store the state of a particular execution, with the expectation that it is returned if
+the same `JobInstance` is started again. For those familiar with Quartz, the semantics
+are very similar to a Quartz `JobDataMap`.
+
+### The Delegate Pattern and Registering with the Step
+
+Note that the `CompositeItemWriter` is an example of the delegation pattern, which is
+common in Spring Batch. The delegates themselves might implement callback interfaces,
+such as `StepListener`. If they do and if they are being used in conjunction with Spring
+Batch Core as part of a `Step` in a `Job`, then they almost certainly need to be
+registered manually with the `Step`. A reader, writer, or processor that is directly
+wired into the `Step` gets registered automatically if it implements `ItemStream` or a `StepListener` interface. However, because the delegates are not known to the `Step`,
+they need to be injected as listeners or streams (or both if appropriate).
+
+The following example shows how to inject a delegate as a stream in XML:
+
+XML Configuration
+
+```
+<job id="ioSampleJob">
+    <step name="step1">
+        <tasklet>
+            <chunk reader="fooReader" processor="fooProcessor" writer="compositeItemWriter"
+                   commit-interval="2">
+                <streams>
+                    <stream ref="barWriter" />
+                </streams>
+            </chunk>
+        </tasklet>
+    </step>
+</job>
+
+<bean id="compositeItemWriter" class="...CustomCompositeItemWriter">
+    <property name="delegate" ref="barWriter" />
+</bean>
+
+<bean id="barWriter" class="...BarWriter" />
+```
+
+The following example shows how to inject a delegate as a stream in Java:
+
+Java Configuration
+
+```
+@Bean
+public Job ioSampleJob() {
+ return this.jobBuilderFactory.get("ioSampleJob")
+ .start(step1())
+ .build();
+}
+
+@Bean
+public Step step1() {
+ return this.stepBuilderFactory.get("step1")
+ .chunk(2)
+ .reader(fooReader())
+ .processor(fooProcessor())
+ .writer(compositeItemWriter())
+ .stream(barWriter())
+ .build();
+}
+
+@Bean
+public CustomCompositeItemWriter compositeItemWriter() {
+
+ CustomCompositeItemWriter writer = new CustomCompositeItemWriter();
+
+ writer.setDelegate(barWriter());
+
+ return writer;
+}
+
+@Bean
+public BarWriter barWriter() {
+ return new BarWriter();
+}
+```
+
+### Flat Files
+
+One of the most common mechanisms for interchanging bulk data has always been the flat
+file. Unlike XML, which has an agreed upon standard for defining how it is structured
+(XSD), anyone reading a flat file must understand ahead of time exactly how the file is
+structured. In general, all flat files fall into two types: delimited and fixed length.
+Delimited files are those in which fields are separated by a delimiter, such as a comma.
+Fixed-length files have fields that are a set length.
+
+#### The `FieldSet`
+
+When working with flat files in Spring Batch, regardless of whether it is for input or
+output, one of the most important classes is the `FieldSet`. Many architectures and
+libraries contain abstractions for helping you read in from a file, but they usually
+return a `String` or an array of `String` objects. This really only gets you halfway
+there. A `FieldSet` is Spring Batch’s abstraction for enabling the binding of fields from
+a file resource. It allows developers to work with file input in much the same way as
+they would work with database input. A `FieldSet` is conceptually similar to a JDBC`ResultSet`. A `FieldSet` requires only one argument: a `String` array of tokens.
+Optionally, you can also configure the names of the fields so that the fields may be
+accessed either by index or name as patterned after `ResultSet`, as shown in the following
+example:
+
+```
+String[] tokens = new String[]{"foo", "1", "true"};
+FieldSet fs = new DefaultFieldSet(tokens);
+String name = fs.readString(0);
+int value = fs.readInt(1);
+boolean booleanValue = fs.readBoolean(2);
+```
+
+There are many more options on the `FieldSet` interface, such as `Date`, `long`, `BigDecimal`, and so on. The biggest advantage of the `FieldSet` is that it provides
+consistent parsing of flat file input. Rather than each batch job parsing differently in
+potentially unexpected ways, it can be consistent, both when handling errors caused by a
+format exception and when doing simple data conversions.
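
A minimal sketch of the idea (not Spring Batch's `DefaultFieldSet`) might look like the following; `SimpleFieldSet` is a hypothetical class that centralizes token conversion and offers both index- and name-based access:

```
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the FieldSet idea: one place that owns token
// conversion, so every job parses ints and booleans the same way, with
// optional name-based access patterned after JDBC's ResultSet.
class SimpleFieldSet {

    private final String[] tokens;
    private final List<String> names;

    SimpleFieldSet(String[] tokens, String... names) {
        this.tokens = tokens;
        this.names = Arrays.asList(names);
    }

    public String readString(int index) {
        return tokens[index].trim();
    }

    public int readInt(int index) {
        return Integer.parseInt(readString(index));
    }

    public boolean readBoolean(int index) {
        return Boolean.parseBoolean(readString(index));
    }

    public String readString(String name) {
        return readString(names.indexOf(name));
    }
}
```

Centralizing the conversions means a malformed number fails in one well-defined place rather than in ad hoc parsing code scattered across jobs.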
+
+#### `FlatFileItemReader`
+
+A flat file is any type of file that contains at most two-dimensional (tabular) data.
+Reading flat files in the Spring Batch framework is facilitated by the class called `FlatFileItemReader`, which provides basic functionality for reading and parsing flat
+files. The two most important required dependencies of `FlatFileItemReader` are `Resource` and `LineMapper`. The `LineMapper` interface is explored more in the next
+sections. The resource property represents a Spring Core `Resource`. Documentation
+explaining how to create beans of this type can be found in [Spring
+Framework, Chapter 5. Resources](https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#resources). Therefore, this guide does not go into the details of
+creating `Resource` objects beyond showing the following simple example:
+
+```
+Resource resource = new FileSystemResource("resources/trades.csv");
+```
+
+In complex batch environments, the directory structures are often managed by the Enterprise Application Integration (EAI)
+infrastructure, where drop zones for external interfaces are established for moving files
+from FTP locations to batch processing locations and vice versa. File moving utilities
+are beyond the scope of the Spring Batch architecture, but it is not unusual for batch
+job streams to include file moving utilities as steps in the job stream. The batch
+architecture only needs to know how to locate the files to be processed. Spring Batch
+begins the process of feeding the data into the pipe from this starting point. However, [Spring Integration](https://projects.spring.io/spring-integration/) provides many
+of these types of services.
+
+The other properties in `FlatFileItemReader` let you further specify how your data is
+interpreted, as described in the following table:
+
+| Property | Type | Description |
+|---------------------|---------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| comments | String[] | Specifies line prefixes that indicate comment rows. |
+| encoding | String | Specifies what text encoding to use. The default is the value of `Charset.defaultCharset()`. |
+| lineMapper | `LineMapper` | Converts a `String` to an `Object` representing the item. |
+| linesToSkip | int | Number of lines to ignore at the top of the file. |
+|recordSeparatorPolicy|RecordSeparatorPolicy| Used to determine where the line endings are and do things like continue over a line ending if inside a quoted string. |
+| resource | `Resource` | The resource from which to read. |
+|skippedLinesCallback | LineCallbackHandler |Interface that passes the raw line content of the lines in the file to be skipped. If `linesToSkip` is set to 2, then this interface is called twice.|
+| strict | boolean |In strict mode, the reader throws an exception from `open` if the input resource does not exist. Otherwise, it logs the problem and continues. |
+
+##### `LineMapper`
+
+As with `RowMapper`, which takes a low-level construct such as `ResultSet` and returns
+an `Object`, flat file processing requires the same construct to convert a `String` line
+into an `Object`, as shown in the following interface definition:
+
+```
+public interface LineMapper<T> {
+
+ T mapLine(String line, int lineNumber) throws Exception;
+
+}
+```
+
+The basic contract is that, given the current line and the line number with which it is
+associated, the mapper should return a resulting domain object. This is similar to `RowMapper`, in that each line is associated with its line number, just as each row in a `ResultSet` is tied to its row number. This allows the line number to be tied to the
+resulting domain object for identity comparison or for more informative logging. However,
+unlike `RowMapper`, the `LineMapper` is given a raw line which, as discussed above, only
+gets you halfway there. The line must be tokenized into a `FieldSet`, which can then be
+mapped to an object, as described later in this document.
+
+##### `LineTokenizer`
+
+An abstraction for turning a line of input into a `FieldSet` is necessary because there
+can be many formats of flat file data that need to be converted to a `FieldSet`. In
+Spring Batch, this interface is the `LineTokenizer`:
+
+```
+public interface LineTokenizer {
+
+ FieldSet tokenize(String line);
+
+}
+```
+
+The contract of a `LineTokenizer` is such that, given a line of input (in theory the `String` could encompass more than one line), a `FieldSet` representing the line is
+returned. This `FieldSet` can then be passed to a `FieldSetMapper`. Spring Batch contains
+the following `LineTokenizer` implementations:
+
+* `DelimitedLineTokenizer`: Used for files where fields in a record are separated by a
+ delimiter. The most common delimiter is a comma, but pipes or semicolons are often used
+ as well.
+
+* `FixedLengthTokenizer`: Used for files where fields in a record are each a "fixed
+ width". The width of each field must be defined for each record type.
+
+* `PatternMatchingCompositeLineTokenizer`: Determines which `LineTokenizer` among a list of
+ tokenizers should be used on a particular line by checking against a pattern.
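
To illustrate the tokenizing contract, the following hypothetical `SimpleDelimitedTokenizer` sketches what a delimited tokenizer does in its simplest form (unlike `DelimitedLineTokenizer`, it does not handle quoted fields):

```
import java.util.regex.Pattern;

// Hypothetical sketch of a delimited tokenizer in its simplest form.
// Unlike DelimitedLineTokenizer, it does not handle quoted fields.
class SimpleDelimitedTokenizer {

    private final String delimiter;

    SimpleDelimitedTokenizer(String delimiter) {
        this.delimiter = delimiter;
    }

    public String[] tokenize(String line) {
        // limit -1 keeps trailing empty tokens, so "a,b," yields three fields
        return line.split(Pattern.quote(delimiter), -1);
    }
}
```

Note the `-1` split limit: preserving trailing empty tokens matters in batch files, where a missing last field is still a field.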
+
+##### `FieldSetMapper`
+
+The `FieldSetMapper` interface defines a single method, `mapFieldSet`, which takes a `FieldSet` object and maps its contents to an object. This object may be a custom DTO, a
+domain object, or an array, depending on the needs of the job. The `FieldSetMapper` is
+used in conjunction with the `LineTokenizer` to translate a line of data from a resource
+into an object of the desired type, as shown in the following interface definition:
+
+```
+public interface FieldSetMapper<T> {
+
+ T mapFieldSet(FieldSet fieldSet) throws BindException;
+
+}
+```
+
+The pattern used is the same as the `RowMapper` used by `JdbcTemplate`.
+
+##### `DefaultLineMapper`
+
+Now that the basic interfaces for reading in flat files have been defined, it becomes
+clear that three basic steps are required:
+
+1. Read one line from the file.
+
+2. Pass the `String` line into the `LineTokenizer#tokenize()` method to retrieve a `FieldSet`.
+
+3. Pass the `FieldSet` returned from tokenizing to a `FieldSetMapper`, returning the
+ result from the `ItemReader#read()` method.
+
+The two interfaces described above represent two separate tasks: converting a line into a `FieldSet` and mapping a `FieldSet` to a domain object. Because the input of a `LineTokenizer` matches the input of the `LineMapper` (a line), and the output of a `FieldSetMapper` matches the output of the `LineMapper`, a default implementation that
+uses both a `LineTokenizer` and a `FieldSetMapper` is provided. The `DefaultLineMapper`,
+shown in the following class definition, represents the behavior most users need:
+
+```
+public class DefaultLineMapper<T> implements LineMapper<T>, InitializingBean {
+
+    private LineTokenizer tokenizer;
+
+    private FieldSetMapper<T> fieldSetMapper;
+
+    public T mapLine(String line, int lineNumber) throws Exception {
+        return fieldSetMapper.mapFieldSet(tokenizer.tokenize(line));
+    }
+
+    public void setLineTokenizer(LineTokenizer tokenizer) {
+        this.tokenizer = tokenizer;
+    }
+
+    public void setFieldSetMapper(FieldSetMapper<T> fieldSetMapper) {
+        this.fieldSetMapper = fieldSetMapper;
+    }
+}
+```
+
+The above functionality is provided in a default implementation, rather than being built
+into the reader itself (as was done in previous versions of the framework) to allow users
+greater flexibility in controlling the parsing process, especially if access to the raw
+line is needed.
+
+##### Simple Delimited File Reading Example
+
+The following example illustrates how to read a flat file with an actual domain scenario.
+This particular batch job reads in football players from the following file:
+
+```
+ID,lastName,firstName,position,birthYear,debutYear
+"AbduKa00,Abdul-Jabbar,Karim,rb,1974,1996",
+"AbduRa00,Abdullah,Rabih,rb,1975,1999",
+"AberWa00,Abercrombie,Walter,rb,1959,1982",
+"AbraDa00,Abramowicz,Danny,wr,1945,1967",
+"AdamBo00,Adams,Bob,te,1946,1969",
+"AdamCh00,Adams,Charlie,wr,1979,2003"
+```
+
+The contents of this file are mapped to the following `Player` domain object:
+
+```
+public class Player implements Serializable {
+
+ private String ID;
+ private String lastName;
+ private String firstName;
+ private String position;
+ private int birthYear;
+ private int debutYear;
+
+ public String toString() {
+ return "PLAYER:ID=" + ID + ",Last Name=" + lastName +
+ ",First Name=" + firstName + ",Position=" + position +
+ ",Birth Year=" + birthYear + ",DebutYear=" +
+ debutYear;
+ }
+
+ // setters and getters...
+}
+```
+
+To map a `FieldSet` into a `Player` object, a `FieldSetMapper` that returns players needs
+to be defined, as shown in the following example:
+
+```
+protected static class PlayerFieldSetMapper implements FieldSetMapper<Player> {
+ public Player mapFieldSet(FieldSet fieldSet) {
+ Player player = new Player();
+
+ player.setID(fieldSet.readString(0));
+ player.setLastName(fieldSet.readString(1));
+ player.setFirstName(fieldSet.readString(2));
+ player.setPosition(fieldSet.readString(3));
+ player.setBirthYear(fieldSet.readInt(4));
+ player.setDebutYear(fieldSet.readInt(5));
+
+ return player;
+ }
+}
+```
+
+The file can then be read by correctly constructing a `FlatFileItemReader` and calling `read`, as shown in the following example:
+
+```
+FlatFileItemReader<Player> itemReader = new FlatFileItemReader<>();
+itemReader.setResource(new FileSystemResource("resources/players.csv"));
+DefaultLineMapper<Player> lineMapper = new DefaultLineMapper<>();
+//DelimitedLineTokenizer defaults to comma as its delimiter
+lineMapper.setLineTokenizer(new DelimitedLineTokenizer());
+lineMapper.setFieldSetMapper(new PlayerFieldSetMapper());
+itemReader.setLineMapper(lineMapper);
+itemReader.open(new ExecutionContext());
+Player player = itemReader.read();
+```
+
+Each call to `read` returns a new `Player` object from each line in the file. When the end of the file is
+reached, `null` is returned.
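
The resulting read-until-`null` loop that a step effectively performs can be sketched as follows; `ListReader` is a hypothetical stand-in for an opened reader:

```
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for an opened reader: read() returns one item
// per call and null at the end of input, which terminates the loop.
class ListReader {

    private final Iterator<String> iterator;

    ListReader(List<String> lines) {
        this.iterator = lines.iterator();
    }

    public String read() {
        return iterator.hasNext() ? iterator.next() : null;
    }
}
```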
+
+##### Mapping Fields by Name
+
+There is one additional piece of functionality that is allowed by both `DelimitedLineTokenizer` and `FixedLengthTokenizer` and that is similar in function to a
+JDBC `ResultSet`. The names of the fields can be injected into either of these `LineTokenizer` implementations to increase the readability of the mapping function.
+First, the column names of all fields in the flat file are injected into the tokenizer,
+as shown in the following example:
+
+```
+tokenizer.setNames(new String[] {"ID", "lastName", "firstName", "position", "birthYear", "debutYear"});
+```
+
+A `FieldSetMapper` can use this information as follows:
+
+```
+public class PlayerMapper implements FieldSetMapper<Player> {
+ public Player mapFieldSet(FieldSet fs) {
+
+ if (fs == null) {
+ return null;
+ }
+
+ Player player = new Player();
+ player.setID(fs.readString("ID"));
+ player.setLastName(fs.readString("lastName"));
+ player.setFirstName(fs.readString("firstName"));
+ player.setPosition(fs.readString("position"));
+ player.setDebutYear(fs.readInt("debutYear"));
+ player.setBirthYear(fs.readInt("birthYear"));
+
+ return player;
+ }
+}
+```
+
+##### Automapping FieldSets to Domain Objects
+
+For many, having to write a specific `FieldSetMapper` is equally as cumbersome as writing
+a specific `RowMapper` for a `JdbcTemplate`. Spring Batch makes this easier by providing
+a `FieldSetMapper` that automatically maps fields by matching a field name with a setter
+on the object using the JavaBean specification.
+
+Again using the football example, the `BeanWrapperFieldSetMapper` configuration looks like
+the following snippet in XML:
+
+XML Configuration
+
+```
+<bean id="fieldSetMapper"
+      class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
+    <property name="prototypeBeanName" value="player" />
+</bean>
+
+<bean id="player"
+      class="org.springframework.batch.sample.domain.Player"
+      scope="prototype" />
+```
+
+Again using the football example, the `BeanWrapperFieldSetMapper` configuration looks like
+the following snippet in Java:
+
+Java Configuration
+
+```
+@Bean
+public FieldSetMapper fieldSetMapper() {
+ BeanWrapperFieldSetMapper fieldSetMapper = new BeanWrapperFieldSetMapper();
+
+ fieldSetMapper.setPrototypeBeanName("player");
+
+ return fieldSetMapper;
+}
+
+@Bean
+@Scope("prototype")
+public Player player() {
+ return new Player();
+}
+```
+
+For each entry in the `FieldSet`, the mapper looks for a corresponding setter on a new
+instance of the `Player` object (for this reason, prototype scope is required) in the
+same way the Spring container looks for setters matching a property name. Each available
+field in the `FieldSet` is mapped, and the resultant `Player` object is returned, with no
+code required.
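
The reflection technique behind this kind of automapping can be sketched in plain Java. The following is a simplified illustration, not the `BeanWrapperFieldSetMapper` implementation; `ReflectiveMapper` and `PlayerBean` are hypothetical, and only `String` and `int` setters are handled:

```
import java.lang.reflect.Method;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of reflection-based automapping: match each field
// name to a JavaBean setter and invoke it with a converted value. Only
// String and int setters are handled; the real BeanWrapperFieldSetMapper
// supports a much wider range of property conversions.
class ReflectiveMapper {

    public static <T> T map(Map<String, String> fields, T target) {
        try {
            for (Map.Entry<String, String> entry : fields.entrySet()) {
                String setter = "set"
                        + Character.toUpperCase(entry.getKey().charAt(0))
                        + entry.getKey().substring(1);
                for (Method method : target.getClass().getMethods()) {
                    if (method.getName().equals(setter) && method.getParameterCount() == 1) {
                        Class<?> type = method.getParameterTypes()[0];
                        Object value = (type == int.class)
                                ? Integer.parseInt(entry.getValue())
                                : entry.getValue();
                        method.invoke(target, value);
                    }
                }
            }
            return target;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}

// Hypothetical JavaBean target for the mapper above.
class PlayerBean {

    private String lastName;
    private int birthYear;

    public void setLastName(String lastName) { this.lastName = lastName; }
    public String getLastName() { return lastName; }
    public void setBirthYear(int birthYear) { this.birthYear = birthYear; }
    public int getBirthYear() { return birthYear; }
}
```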
+
+##### Fixed Length File Formats
+
+So far, only delimited files have been discussed in much detail. However, they represent
+only half of the file reading picture. Many organizations that use flat files use fixed
+length formats. An example fixed length file follows:
+
+```
+UK21341EAH4121131.11customer1
+UK21341EAH4221232.11customer2
+UK21341EAH4321333.11customer3
+UK21341EAH4421434.11customer4
+UK21341EAH4521535.11customer5
+```
+
+While this looks like one large field, it actually represents four distinct fields:
+
+1. ISIN: Unique identifier for the item being ordered - 12 characters long.
+
+2. Quantity: Number of the item being ordered - 3 characters long.
+
+3. Price: Price of the item - 5 characters long.
+
+4. Customer: ID of the customer ordering the item - 9 characters long.
+
+When configuring the `FixedLengthLineTokenizer`, each of these lengths must be provided
+in the form of ranges.
+
+The following example shows how to define ranges for the `FixedLengthLineTokenizer` in
+XML:
+
+XML Configuration
+
+```
+<bean id="fixedLengthLineTokenizer"
+      class="org.springframework.batch.item.file.transform.FixedLengthTokenizer">
+    <property name="names" value="ISIN,Quantity,Price,Customer" />
+    <property name="columns" value="1-12, 13-15, 16-20, 21-29" />
+</bean>
+```
+
+Because the `FixedLengthLineTokenizer` uses the same `LineTokenizer` interface as
+discussed earlier, it returns the same `FieldSet` as if a delimiter had been used. This
+allows the same approaches to be used in handling its output, such as using the `BeanWrapperFieldSetMapper`.
+
+| |Supporting the preceding syntax for ranges requires that a specialized property editor, `RangeArrayPropertyEditor`, be configured in the `ApplicationContext`. However, this bean is automatically declared in an `ApplicationContext` where the batch namespace is used.|
+|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+The following example shows how to define ranges for the `FixedLengthLineTokenizer` in
+Java:
+
+Java Configuration
+
+```
+@Bean
+public FixedLengthTokenizer fixedLengthTokenizer() {
+ FixedLengthTokenizer tokenizer = new FixedLengthTokenizer();
+
+ tokenizer.setNames("ISIN", "Quantity", "Price", "Customer");
+ tokenizer.setColumns(new Range(1, 12),
+ new Range(13, 15),
+ new Range(16, 20),
+ new Range(21, 29));
+
+ return tokenizer;
+}
+```
+
+##### Multiple Record Types within a Single File
+
+All of the file reading examples up to this point have made a key assumption for
+simplicity’s sake: all of the records in a file have the same format. However, this may
+not always be the case. It is very common that a file might have records with different
+formats that need to be tokenized differently and mapped to different objects. The
+following excerpt from a file illustrates this:
+
+```
+USER;Smith;Peter;;T;20014539;F
+LINEA;1044391041ABC037.49G201XX1383.12H
+LINEB;2134776319DEF422.99M005LI
+```
+
+In this file we have three types of records, "USER", "LINEA", and "LINEB". A "USER" line
+corresponds to a `User` object. "LINEA" and "LINEB" both correspond to `Line` objects,
+though a "LINEA" has more information than a "LINEB".
+
+The `ItemReader` reads each line individually, but we must specify different `LineTokenizer` and `FieldSetMapper` objects so that the `ItemWriter` receives the
+correct items. The `PatternMatchingCompositeLineMapper` makes this easy by allowing maps
+of patterns to `LineTokenizers` and patterns to `FieldSetMappers` to be configured.
+
+The following example shows how to configure the `PatternMatchingCompositeLineMapper` in
+XML:
+
+XML Configuration
+
+```
+<bean id="orderFileLineMapper"
+      class="org.spr...PatternMatchingCompositeLineMapper">
+    <property name="tokenizers">
+        <map>
+            <entry key="USER*" value-ref="userTokenizer" />
+            <entry key="LINEA*" value-ref="lineATokenizer" />
+            <entry key="LINEB*" value-ref="lineBTokenizer" />
+        </map>
+    </property>
+    <property name="fieldSetMappers">
+        <map>
+            <entry key="USER*" value-ref="userFieldSetMapper" />
+            <entry key="LINE*" value-ref="lineFieldSetMapper" />
+        </map>
+    </property>
+</bean>
+```
+
+The following example shows how to configure the `PatternMatchingCompositeLineMapper` in
+Java:
+
+Java Configuration
+
+```
+@Bean
+public PatternMatchingCompositeLineMapper orderFileLineMapper() {
+ PatternMatchingCompositeLineMapper lineMapper =
+ new PatternMatchingCompositeLineMapper();
+
+ Map<String, LineTokenizer> tokenizers = new HashMap<>(3);
+ tokenizers.put("USER*", userTokenizer());
+ tokenizers.put("LINEA*", lineATokenizer());
+ tokenizers.put("LINEB*", lineBTokenizer());
+
+ lineMapper.setTokenizers(tokenizers);
+
+ Map<String, FieldSetMapper> mappers = new HashMap<>(2);
+ mappers.put("USER*", userFieldSetMapper());
+ mappers.put("LINE*", lineFieldSetMapper());
+
+ lineMapper.setFieldSetMappers(mappers);
+
+ return lineMapper;
+}
+```
+
+In this example, "LINEA" and "LINEB" have separate `LineTokenizer` instances, but they both use
+the same `FieldSetMapper`.
+
+The `PatternMatchingCompositeLineMapper` uses the `PatternMatcher#match` method
+in order to select the correct delegate for each line. The `PatternMatcher` allows for
+two wildcard characters with special meaning: the question mark ("?") matches exactly one
+character, while the asterisk ("\*") matches zero or more characters. Note that, in the
+preceding configuration, all patterns end with an asterisk, making them effectively
+prefixes to lines. The `PatternMatcher` always matches the most specific pattern
+possible, regardless of the order in the configuration. So if "LINE\*" and "LINEA\*" were
+both listed as patterns, "LINEA" would match pattern "LINEA\*", while "LINEB" would match
+pattern "LINE\*". Additionally, a single asterisk ("\*") can serve as a default by matching
+any line not matched by any other pattern.
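
The "most specific pattern wins" rule can be sketched for the prefix patterns used above. This is a simplification of `PatternMatcher` (it ignores the `?` wildcard); `PrefixMatcher` is a hypothetical class:

```
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical simplification of the "most specific pattern wins" rule
// for prefix patterns ending in '*': pick the longest matching prefix.
// Spring Batch's PatternMatcher additionally supports the '?' wildcard.
class PrefixMatcher {

    public static String match(String line, Map<String, String> delegates) {
        String best = null;
        int bestLength = -1;
        for (Map.Entry<String, String> entry : delegates.entrySet()) {
            String pattern = entry.getKey();
            String prefix = pattern.endsWith("*")
                    ? pattern.substring(0, pattern.length() - 1)
                    : pattern;
            if (line.startsWith(prefix) && prefix.length() > bestLength) {
                bestLength = prefix.length();
                best = entry.getValue();
            }
        }
        return best;
    }
}
```

Because the longest matching prefix wins, the result is independent of the order in which the patterns were registered.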
+
+The following example shows how to match a line not matched by any other pattern in XML:
+
+XML Configuration
+
+```
+<entry key="*" value-ref="defaultLineTokenizer" />
+```
+
+The following example shows how to match a line not matched by any other pattern in Java:
+
+Java Configuration
+
+```
+...
+tokenizers.put("*", defaultLineTokenizer());
+...
+```
+
+There is also a `PatternMatchingCompositeLineTokenizer` that can be used for tokenization
+alone.
+
+It is also common for a flat file to contain records that each span multiple lines. To
+handle this situation, a more complex strategy is required. A demonstration of this
+common pattern can be found in the `multiLineRecords` sample.
+
+##### Exception Handling in Flat Files
+
+There are many scenarios when tokenizing a line may cause exceptions to be thrown. Many
+flat files are imperfect and contain incorrectly formatted records. Many users choose to
+skip these erroneous lines while logging the issue, the original line, and the line
+number. These logs can later be inspected manually or by another batch job. For this
+reason, Spring Batch provides a hierarchy of exceptions for handling parse exceptions: `FlatFileParseException` and `FlatFileFormatException`. `FlatFileParseException` is
+thrown by the `FlatFileItemReader` when any errors are encountered while trying to read a
+file. `FlatFileFormatException` is thrown by implementations of the `LineTokenizer` interface and indicates a more specific error encountered while tokenizing.
+
+###### `IncorrectTokenCountException`
+
+Both `DelimitedLineTokenizer` and `FixedLengthTokenizer` have the ability to specify
+column names that can be used for creating a `FieldSet`. However, if the number of column
+names does not match the number of columns found while tokenizing a line, the `FieldSet` cannot be created, and an `IncorrectTokenCountException` is thrown, which contains the
+number of tokens encountered, and the number expected, as shown in the following example:
+
+```
+tokenizer.setNames(new String[] {"A", "B", "C", "D"});
+
+try {
+ tokenizer.tokenize("a,b,c");
+}
+catch (IncorrectTokenCountException e) {
+ assertEquals(4, e.getExpectedCount());
+ assertEquals(3, e.getActualCount());
+}
+```
+
+Because the tokenizer was configured with 4 column names but only 3 tokens were found in
+the file, an `IncorrectTokenCountException` was thrown.
+
+###### `IncorrectLineLengthException`
+
+Files formatted in a fixed-length format have additional requirements when parsing
+because, unlike a delimited format, each column must strictly adhere to its predefined
+width. If the total length of the line does not match the end of the widest range, an
+exception is thrown, as shown in the following example:
+
+```
+tokenizer.setColumns(new Range[] { new Range(1, 5),
+ new Range(6, 10),
+ new Range(11, 15) });
+try {
+ tokenizer.tokenize("12345");
+ fail("Expected IncorrectLineLengthException");
+}
+catch (IncorrectLineLengthException ex) {
+ assertEquals(15, ex.getExpectedLength());
+ assertEquals(5, ex.getActualLength());
+}
+```
+
+The configured ranges for the tokenizer above are: 1-5, 6-10, and 11-15. Consequently,
+the total length of the line is 15. However, in the preceding example, a line of length 5
+was passed in, causing an `IncorrectLineLengthException` to be thrown. Throwing an
+exception here rather than only mapping the first column allows the processing of the
+line to fail earlier and with more information than it would contain if it failed while
+trying to read in column 2 in a `FieldSetMapper`. However, there are scenarios where the
+length of the line is not always constant. For this reason, validation of line length can
+be turned off via the 'strict' property, as shown in the following example:
+
+```
+tokenizer.setColumns(new Range[] { new Range(1, 5), new Range(6, 10) });
+tokenizer.setStrict(false);
+FieldSet tokens = tokenizer.tokenize("12345");
+assertEquals("12345", tokens.readString(0));
+assertEquals("", tokens.readString(1));
+```
+
+The preceding example is almost identical to the one before it, except that `tokenizer.setStrict(false)` was called. This setting tells the tokenizer to not enforce
+line lengths when tokenizing the line. A `FieldSet` is now correctly created and
+returned. However, it contains only empty tokens for the remaining values.
+
+#### `FlatFileItemWriter`
+
+Writing out to flat files has the same problems and issues that reading in from a file
+must overcome. A step must be able to write either delimited or fixed length formats in a
+transactional manner.
+
+##### `LineAggregator`
+
+Just as the `LineTokenizer` interface is necessary to turn a line of input into a `FieldSet`, file writing must have a way to aggregate multiple fields into a single string
+for writing to a file. In Spring Batch, this is the `LineAggregator`, shown in the
+following interface definition:
+
+```
+public interface LineAggregator<T> {
+
+ public String aggregate(T item);
+
+}
+```
+
+The `LineAggregator` is the logical opposite of `LineTokenizer`. `LineTokenizer` takes a `String` and returns a `FieldSet`, whereas `LineAggregator` takes an `item` and returns a `String`.
+
+###### `PassThroughLineAggregator`
+
+The most basic implementation of the `LineAggregator` interface is the `PassThroughLineAggregator`, which assumes that the object is already a string or that
+its string representation is acceptable for writing, as shown in the following code:
+
+```
+public class PassThroughLineAggregator<T> implements LineAggregator<T> {
+
+ public String aggregate(T item) {
+ return item.toString();
+ }
+}
+```
+
+The preceding implementation is useful if direct control of creating the string is
+required but the advantages of a `FlatFileItemWriter`, such as transaction and restart
+support, are necessary.
+
+##### Simplified File Writing Example
+
+Now that the `LineAggregator` interface and its most basic implementation, `PassThroughLineAggregator`, have been defined, the basic flow of writing can be
+explained:
+
+1. The object to be written is passed to the `LineAggregator` in order to obtain a `String`.
+
+2. The returned `String` is written to the configured file.
+
+The following excerpt from the `FlatFileItemWriter` expresses this in code:
+
+```
+public void write(T item) throws Exception {
+ write(lineAggregator.aggregate(item) + LINE_SEPARATOR);
+}
+```
+
+In XML, a simple example of configuration might look like the following:
+
+XML Configuration
+
+```
+<bean id="itemWriter" class="org.spr...FlatFileItemWriter">
+    <property name="resource" value="file:target/test-outputs/output.txt" />
+    <property name="lineAggregator">
+        <bean class="org.spr...transform.PassThroughLineAggregator"/>
+    </property>
+</bean>
+```
+
+In Java, a simple example of configuration might look like the following:
+
+Java Configuration
+
+```
+@Bean
+public FlatFileItemWriter<String> itemWriter() {
+ return new FlatFileItemWriterBuilder<String>()
+ .name("itemWriter")
+ .resource(new FileSystemResource("target/test-outputs/output.txt"))
+ .lineAggregator(new PassThroughLineAggregator<>())
+ .build();
+}
+```
+
+##### `FieldExtractor`
+
+The preceding example may be useful for the most basic uses of writing to a file.
+However, most users of the `FlatFileItemWriter` have a domain object that needs to be
+written out and, thus, must be converted into a line. In file reading, the following was
+required:
+
+1. Read one line from the file.
+
+2. Pass the line into the `LineTokenizer#tokenize()` method, in order to retrieve a `FieldSet`.
+
+3. Pass the `FieldSet` returned from tokenizing to a `FieldSetMapper`, returning the
+ result from the `ItemReader#read()` method.
+
+File writing has similar but inverse steps:
+
+1. Pass the item to be written to the writer.
+
+2. Convert the fields on the item into an array.
+
+3. Aggregate the resulting array into a line.
+
+Because there is no way for the framework to know which fields from the object need to
+be written out, a `FieldExtractor` must be written to accomplish the task of turning the
+item into an array, as shown in the following interface definition:
+
+```
+public interface FieldExtractor<T> {
+
+ Object[] extract(T item);
+
+}
+```
+
+Implementations of the `FieldExtractor` interface should create an array from the fields
+of the provided object, which can then be written out with a delimiter between the
+elements or as part of a fixed-width line.
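
A hand-written extractor for a simple domain object might look like the following sketch; `CustomerRecord` and `CustomerFieldExtractor` are hypothetical classes, not Spring Batch types:

```
import java.math.BigDecimal;

// Hypothetical domain object and hand-written extractor: the array order
// determines the order of the fields in the written line. Neither class
// is a Spring Batch type.
class CustomerRecord {

    final String name;
    final BigDecimal credit;

    CustomerRecord(String name, BigDecimal credit) {
        this.name = name;
        this.credit = credit;
    }
}

class CustomerFieldExtractor {

    // Analogous to FieldExtractor#extract.
    public Object[] extract(CustomerRecord item) {
        return new Object[] { item.name, item.credit };
    }
}
```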
+
+###### `PassThroughFieldExtractor`
+
+There are many cases where a collection, such as an array, `Collection`, or `FieldSet`,
+needs to be written out. "Extracting" an array from one of these collection types is very
+straightforward. To do so, convert the collection to an array. Therefore, the `PassThroughFieldExtractor` should be used in this scenario. It should be noted that, if
+the object passed in is not a collection type, then the `PassThroughFieldExtractor` returns an array containing solely the item to be extracted.
+
+###### `BeanWrapperFieldExtractor`
+
+As with the `BeanWrapperFieldSetMapper` described in the file reading section, it is
+often preferable to configure how to convert a domain object to an object array, rather
+than writing the conversion yourself. The `BeanWrapperFieldExtractor` provides this
+functionality, as shown in the following example:
+
+```
+BeanWrapperFieldExtractor<Name> extractor = new BeanWrapperFieldExtractor<>();
+extractor.setNames(new String[] { "first", "last", "born" });
+
+String first = "Alan";
+String last = "Turing";
+int born = 1912;
+
+Name n = new Name(first, last, born);
+Object[] values = extractor.extract(n);
+
+assertEquals(first, values[0]);
+assertEquals(last, values[1]);
+assertEquals(born, values[2]);
+```
+
+This extractor implementation has only one required property: the names of the fields to
+map. Just as the `BeanWrapperFieldSetMapper` needs field names to map fields on the
+`FieldSet` to setters on the provided object, the `BeanWrapperFieldExtractor` needs names
+to map to getters for creating an object array. It is worth noting that the order of the
+names determines the order of the fields within the array.
+
+##### Delimited File Writing Example
+
+The most basic flat file format is one in which all fields are separated by a delimiter.
+This can be accomplished using a `DelimitedLineAggregator`. The following example writes
+out a simple domain object that represents a credit to a customer account:
+
+```
+public class CustomerCredit {
+
+ private int id;
+ private String name;
+ private BigDecimal credit;
+
+ //getters and setters removed for clarity
+}
+```
+
+Because a domain object is being used, an implementation of the `FieldExtractor`
+interface must be provided, along with the delimiter to use.
+
+The following example shows how to use the `FieldExtractor` with a delimiter in XML:
+
+XML Configuration
+
+```
+<bean id="itemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
+    <property name="resource" ref="outputResource"/>
+    <property name="lineAggregator">
+        <bean class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
+            <property name="delimiter" value=","/>
+            <property name="fieldExtractor">
+                <bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
+                    <property name="names" value="name,credit"/>
+                </bean>
+            </property>
+        </bean>
+    </property>
+</bean>
+```
+
+The following example shows how to use the `FieldExtractor` with a delimiter in Java:
+
+Java Configuration
+
+```
+@Bean
+public FlatFileItemWriter<CustomerCredit> itemWriter(Resource outputResource) throws Exception {
+    BeanWrapperFieldExtractor<CustomerCredit> fieldExtractor = new BeanWrapperFieldExtractor<>();
+    fieldExtractor.setNames(new String[] {"name", "credit"});
+    fieldExtractor.afterPropertiesSet();
+
+    DelimitedLineAggregator<CustomerCredit> lineAggregator = new DelimitedLineAggregator<>();
+    lineAggregator.setDelimiter(",");
+    lineAggregator.setFieldExtractor(fieldExtractor);
+
+    return new FlatFileItemWriterBuilder<CustomerCredit>()
+            .name("customerCreditWriter")
+            .resource(outputResource)
+            .lineAggregator(lineAggregator)
+            .build();
+}
+```
+
+In the previous example, the `BeanWrapperFieldExtractor` described earlier in this
+chapter is used to turn the name and credit fields within `CustomerCredit` into an object
+array, which is then written out with commas between each field.
+
+It is also possible to use the `FlatFileItemWriterBuilder.DelimitedBuilder` to
+automatically create the `BeanWrapperFieldExtractor` and `DelimitedLineAggregator`,
+as shown in the following example:
+
+Java Configuration
+
+```
+@Bean
+public FlatFileItemWriter<CustomerCredit> itemWriter(Resource outputResource) throws Exception {
+    return new FlatFileItemWriterBuilder<CustomerCredit>()
+            .name("customerCreditWriter")
+            .resource(outputResource)
+            .delimited()
+            .delimiter("|")
+            .names(new String[] {"name", "credit"})
+            .build();
+}
+```
+
+##### Fixed Width File Writing Example
+
+Delimited is not the only type of flat file format. Many prefer to use a set width for
+each column to delineate between fields, which is usually referred to as 'fixed width'.
+Spring Batch supports this in file writing with the `FormatterLineAggregator`.
+
+Using the same `CustomerCredit` domain object described above, it can be configured as
+follows in XML:
+
+XML Configuration
+
+```
+<bean id="itemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
+    <property name="resource" ref="outputResource"/>
+    <property name="lineAggregator">
+        <bean class="org.springframework.batch.item.file.transform.FormatterLineAggregator">
+            <property name="fieldExtractor">
+                <bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
+                    <property name="names" value="name,credit"/>
+                </bean>
+            </property>
+            <property name="format" value="%-9s%-2.0f"/>
+        </bean>
+    </property>
+</bean>
+```
+
+Using the same `CustomerCredit` domain object described above, it can be configured as
+follows in Java:
+
+Java Configuration
+
+```
+@Bean
+public FlatFileItemWriter<CustomerCredit> itemWriter(Resource outputResource) throws Exception {
+    BeanWrapperFieldExtractor<CustomerCredit> fieldExtractor = new BeanWrapperFieldExtractor<>();
+    fieldExtractor.setNames(new String[] {"name", "credit"});
+    fieldExtractor.afterPropertiesSet();
+
+    FormatterLineAggregator<CustomerCredit> lineAggregator = new FormatterLineAggregator<>();
+    lineAggregator.setFormat("%-9s%-2.0f");
+    lineAggregator.setFieldExtractor(fieldExtractor);
+
+    return new FlatFileItemWriterBuilder<CustomerCredit>()
+            .name("customerCreditWriter")
+            .resource(outputResource)
+            .lineAggregator(lineAggregator)
+            .build();
+}
+```
+
+Most of the preceding example should look familiar. However, the value of the format
+property is new.
+
+The following example shows the format property in XML:
+
+```
+<property name="format" value="%-9s%-2.0f"/>
+```
+
+The following example shows the format property in Java:
+
+```
+...
+FormatterLineAggregator<CustomerCredit> lineAggregator = new FormatterLineAggregator<>();
+lineAggregator.setFormat("%-9s%-2.0f");
+...
+```
+
+The underlying implementation is built using the same `Formatter` added as part of
+Java 5. The Java `Formatter` is based on the `printf` functionality of the C programming
+language. Most details on how to configure a formatter can be found in
+the Javadoc of [Formatter](https://docs.oracle.com/javase/8/docs/api/java/util/Formatter.html).
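Because the aggregator delegates to `java.util.Formatter`, the format string shown earlier can be tried directly with `String.format`, which uses the same syntax. A minimal stdlib sketch (the name and credit values are illustrative):

```java
public class FormatDemo {
    public static void main(String[] args) {
        // %-9s   : left-justify the name in a 9-character column
        // %-2.0f : format the credit with zero decimal places, minimum width 2
        String line = String.format("%-9s%-2.0f", "Turing", 123.45);
        System.out.println("[" + line + "]"); // [Turing   123]
    }
}
```

Note that width is a minimum, not a maximum: the credit above still prints all three digits even though the width is 2.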
+
+It is also possible to use the `FlatFileItemWriterBuilder.FormattedBuilder` to
+automatically create the `BeanWrapperFieldExtractor` and `FormatterLineAggregator`,
+as shown in the following example:
+
+Java Configuration
+
+```
+@Bean
+public FlatFileItemWriter<CustomerCredit> itemWriter(Resource outputResource) throws Exception {
+    return new FlatFileItemWriterBuilder<CustomerCredit>()
+            .name("customerCreditWriter")
+            .resource(outputResource)
+            .formatted()
+            .format("%-9s%-2.0f")
+            .names(new String[] {"name", "credit"})
+            .build();
+}
+```
+
+##### Handling File Creation
+
+`FlatFileItemReader` has a very simple relationship with file resources. When the reader
+is initialized, it opens the file (if it exists), and throws an exception if it does not.
+File writing isn’t quite so simple. At first glance, it seems like a similar
+straightforward contract should exist for `FlatFileItemWriter`: If the file already
+exists, throw an exception, and, if it does not, create it and start writing. However,
+potentially restarting a `Job` can cause issues. In normal restart scenarios, the
+contract is reversed: If the file exists, start writing to it from the last known good
+position, and, if it does not, throw an exception. However, what happens if the file name
+for this job is always the same? In this case, you would want to delete the file if it
+exists, unless it’s a restart. Because of this possibility, the `FlatFileItemWriter`
+contains the property `shouldDeleteIfExists`. Setting this property to `true` causes an
+existing file with the same name to be deleted when the writer is opened.
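As a configuration sketch, the property can be set through `FlatFileItemWriterBuilder` (`shouldDeleteIfExists` is the builder method corresponding to the property; the resource and field names are carried over from the earlier examples):

```java
@Bean
public FlatFileItemWriter<CustomerCredit> itemWriter(Resource outputResource) {
    return new FlatFileItemWriterBuilder<CustomerCredit>()
            .name("customerCreditWriter")
            .resource(outputResource)
            // Delete any leftover file from a previous (non-restart) run.
            .shouldDeleteIfExists(true)
            .delimited()
            .delimiter(",")
            .names(new String[] {"name", "credit"})
            .build();
}
```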
+
+### XML Item Readers and Writers
+
+Spring Batch provides transactional infrastructure for both reading XML records and
+mapping them to Java objects as well as writing Java objects as XML records.
+
+| |**Constraints on streaming XML**: The StAX API is used for I/O, as other standard XML parsing APIs do not fit batch processing requirements (DOM loads the whole input into memory at once and SAX controls the parsing process by allowing the user to provide only callbacks).|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+We need to consider how XML input and output works in Spring Batch. First, there are a
+few concepts that vary from file reading and writing but are common across Spring Batch
+XML processing. With XML processing, instead of lines of records (`FieldSet` instances) that need
+to be tokenized, it is assumed an XML resource is a collection of 'fragments'
+corresponding to individual records, as shown in the following image:
+
+![XML Input](https://docs.spring.io/spring-batch/docs/current/reference/html/images/xmlinput.png)
+
+Figure 1. XML Input
+
+The 'trade' tag is defined as the 'root element' in the scenario above. Everything
+between `<trade>` and `</trade>` is considered one 'fragment'. Spring Batch
+uses Object/XML Mapping (OXM) to bind fragments to objects. However, Spring Batch is not
+tied to any particular XML binding technology. Typical use is to delegate to
+[Spring OXM](https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#oxm), which
+provides uniform abstraction for the most popular OXM technologies. The dependency on
+Spring OXM is optional and you can choose to implement Spring Batch specific interfaces
+if desired. The relationship to the technologies that OXM supports is shown in the
+following image:
+
+![OXM Binding](https://docs.spring.io/spring-batch/docs/current/reference/html/images/oxm-fragments.png)
+
+Figure 2. OXM Binding
+
+With an introduction to OXM and how one can use XML fragments to represent records, we
+can now more closely examine readers and writers.
+
+#### `StaxEventItemReader`
+
+The `StaxEventItemReader` configuration provides a typical setup for the processing of
+records from an XML input stream. First, consider the following set of XML records that
+the `StaxEventItemReader` can process:
+
+```
+<?xml version="1.0" encoding="UTF-8"?>
+<records>
+    <trade xmlns="https://springframework.org/batch/sample/io/oxm/domain">
+        <isin>XYZ0001</isin>
+        <quantity>5</quantity>
+        <price>11.39</price>
+        <customer>Customer1</customer>
+    </trade>
+    <trade xmlns="https://springframework.org/batch/sample/io/oxm/domain">
+        <isin>XYZ0002</isin>
+        <quantity>2</quantity>
+        <price>72.99</price>
+        <customer>Customer2c</customer>
+    </trade>
+    <trade xmlns="https://springframework.org/batch/sample/io/oxm/domain">
+        <isin>XYZ0003</isin>
+        <quantity>9</quantity>
+        <price>99.99</price>
+        <customer>Customer3</customer>
+    </trade>
+</records>
+```
+
+To be able to process the XML records, the following is needed:
+
+* Root Element Name: The name of the root element of the fragment that constitutes the
+  object to be mapped. The example configuration demonstrates this with the value of `trade`.
+
+* Resource: A Spring Resource that represents the file to read.
+
+* `Unmarshaller`: An unmarshalling facility provided by Spring OXM for mapping the XML
+ fragment to an object.
+
+The following example shows how to define a `StaxEventItemReader` that works with a root
+element named `trade`, a resource of `data/iosample/input/input.xml`, and an unmarshaller
+called `tradeMarshaller` in XML:
+
+XML Configuration
+
+```
+<bean id="itemReader" class="org.springframework.batch.item.xml.StaxEventItemReader">
+    <property name="fragmentRootElementName" value="trade"/>
+    <property name="resource" value="data/iosample/input/input.xml"/>
+    <property name="unmarshaller" ref="tradeMarshaller"/>
+</bean>
+```
+
+The following example shows how to define a `StaxEventItemReader` that works with a root
+element named `trade`, a resource of `data/iosample/input/input.xml`, and an unmarshaller
+called `tradeMarshaller` in Java:
+
+Java Configuration
+
+```
+@Bean
+public StaxEventItemReader<Trade> itemReader() {
+	return new StaxEventItemReaderBuilder<Trade>()
+ .name("itemReader")
+ .resource(new FileSystemResource("org/springframework/batch/item/xml/domain/trades.xml"))
+ .addFragmentRootElements("trade")
+ .unmarshaller(tradeMarshaller())
+ .build();
+
+}
+```
+
+Note that, in this example, we have chosen to use an `XStreamMarshaller`, which accepts
+an alias passed in as a map with the first key and value being the name of the fragment
+(that is, a root element) and the object type to bind. Then, similar to a `FieldSet`, the
+names of the other elements that map to fields within the object type are described as
+key/value pairs in the map. In the configuration file, we can use a Spring configuration
+utility to describe the required alias.
+
+The following example shows how to describe the alias in XML:
+
+XML Configuration
+
+```
+<bean id="tradeMarshaller" class="org.springframework.oxm.xstream.XStreamMarshaller">
+    <property name="aliases">
+        <util:map id="aliases">
+            <entry key="trade" value="org.springframework.batch.sample.domain.trade.Trade"/>
+            <entry key="price" value="java.math.BigDecimal"/>
+            <entry key="isin" value="java.lang.String"/>
+            <entry key="customer" value="java.lang.String"/>
+            <entry key="quantity" value="java.lang.Long"/>
+        </util:map>
+    </property>
+</bean>
+```
+
+The following example shows how to describe the alias in Java:
+
+Java Configuration
+
+```
+@Bean
+public XStreamMarshaller tradeMarshaller() {
+ Map<String, Class<?>> aliases = new HashMap<>();
+ aliases.put("trade", Trade.class);
+ aliases.put("price", BigDecimal.class);
+ aliases.put("isin", String.class);
+ aliases.put("customer", String.class);
+ aliases.put("quantity", Long.class);
+
+ XStreamMarshaller marshaller = new XStreamMarshaller();
+
+ marshaller.setAliases(aliases);
+
+ return marshaller;
+}
+```
+
+On input, the reader reads the XML resource until it recognizes that a new fragment is
+about to start. By default, the reader matches the element name to recognize that a new
+fragment is about to start. The reader creates a standalone XML document from the
+fragment and passes the document to a deserializer (typically a wrapper around a Spring
+OXM `Unmarshaller`) to map the XML to a Java object.
+
+In summary, this procedure is analogous to the following Java code, which uses the
+injection provided by the Spring configuration:
+
+```
+StaxEventItemReader<Trade> xmlStaxEventItemReader = new StaxEventItemReader<>();
+Resource resource = new ByteArrayResource(xmlResource.getBytes());
+
+Map<String, String> aliases = new HashMap<>();
+aliases.put("trade","org.springframework.batch.sample.domain.trade.Trade");
+aliases.put("price","java.math.BigDecimal");
+aliases.put("customer","java.lang.String");
+aliases.put("isin","java.lang.String");
+aliases.put("quantity","java.lang.Long");
+XStreamMarshaller unmarshaller = new XStreamMarshaller();
+unmarshaller.setAliases(aliases);
+xmlStaxEventItemReader.setUnmarshaller(unmarshaller);
+xmlStaxEventItemReader.setResource(resource);
+xmlStaxEventItemReader.setFragmentRootElementName("trade");
+xmlStaxEventItemReader.open(new ExecutionContext());
+
+boolean hasNext = true;
+
+Trade trade = null;
+
+while (hasNext) {
+ trade = xmlStaxEventItemReader.read();
+ if (trade == null) {
+ hasNext = false;
+ }
+ else {
+ System.out.println(trade);
+ }
+}
+```
+
+#### `StaxEventItemWriter`
+
+Output works symmetrically to input. The `StaxEventItemWriter` needs a `Resource`, a
+marshaller, and a `rootTagName`. A Java object is passed to a marshaller (typically a
+standard Spring OXM Marshaller) which writes to a `Resource` by using a custom event
+writer that filters the `StartDocument` and `EndDocument` events produced for each
+fragment by the OXM tools.
+
+The following XML example uses the `MarshallingEventWriterSerializer`:
+
+XML Configuration
+
+```
+<bean id="itemWriter" class="org.springframework.batch.item.xml.StaxEventItemWriter">
+    <property name="resource" ref="outputResource"/>
+    <property name="marshaller" ref="tradeMarshaller"/>
+    <property name="rootTagName" value="trade"/>
+    <property name="overwriteOutput" value="true"/>
+</bean>
+```
+
+The following Java example uses the `MarshallingEventWriterSerializer`:
+
+Java Configuration
+
+```
+@Bean
+public StaxEventItemWriter<Trade> itemWriter(Resource outputResource) {
+ return new StaxEventItemWriterBuilder<Trade>()
+ .name("tradesWriter")
+ .marshaller(tradeMarshaller())
+ .resource(outputResource)
+ .rootTagName("trade")
+ .overwriteOutput(true)
+ .build();
+
+}
+```
+
+The preceding configuration sets up the three required properties and sets the optional
+`overwriteOutput=true` attribute, mentioned earlier in this chapter, which specifies whether
+an existing file can be overwritten.
+
+The following XML example uses the same marshaller as the one used in the reading example
+shown earlier in the chapter:
+
+XML Configuration
+
+```
+<bean id="tradeMarshaller" class="org.springframework.oxm.xstream.XStreamMarshaller">
+    <property name="aliases">
+        <util:map id="aliases">
+            <entry key="trade" value="org.springframework.batch.sample.domain.trade.Trade"/>
+            <entry key="price" value="java.math.BigDecimal"/>
+            <entry key="isin" value="java.lang.String"/>
+            <entry key="customer" value="java.lang.String"/>
+            <entry key="quantity" value="java.lang.Long"/>
+        </util:map>
+    </property>
+</bean>
+```
+
+The following Java example uses the same marshaller as the one used in the reading example
+shown earlier in the chapter:
+
+Java Configuration
+
+```
+@Bean
+public XStreamMarshaller customerCreditMarshaller() {
+ XStreamMarshaller marshaller = new XStreamMarshaller();
+
+ Map<String, Class<?>> aliases = new HashMap<>();
+ aliases.put("trade", Trade.class);
+ aliases.put("price", BigDecimal.class);
+ aliases.put("isin", String.class);
+ aliases.put("customer", String.class);
+ aliases.put("quantity", Long.class);
+
+ marshaller.setAliases(aliases);
+
+ return marshaller;
+}
+```
+
+To summarize with a Java example, the following code illustrates all of the points
+discussed, demonstrating the programmatic setup of the required properties:
+
+```
+FileSystemResource resource = new FileSystemResource("data/outputFile.xml");
+
+Map<String, String> aliases = new HashMap<>();
+aliases.put("trade","org.springframework.batch.sample.domain.trade.Trade");
+aliases.put("price","java.math.BigDecimal");
+aliases.put("customer","java.lang.String");
+aliases.put("isin","java.lang.String");
+aliases.put("quantity","java.lang.Long");
+Marshaller marshaller = new XStreamMarshaller();
+marshaller.setAliases(aliases);
+
+StaxEventItemWriter<Trade> staxItemWriter =
+ new StaxEventItemWriterBuilder<Trade>()
+ .name("tradesWriter")
+ .marshaller(marshaller)
+ .resource(resource)
+ .rootTagName("trade")
+ .overwriteOutput(true)
+ .build();
+
+staxItemWriter.afterPropertiesSet();
+
+ExecutionContext executionContext = new ExecutionContext();
+staxItemWriter.open(executionContext);
+Trade trade = new Trade();
+trade.setPrice(11.39);
+trade.setIsin("XYZ0001");
+trade.setQuantity(5L);
+trade.setCustomer("Customer1");
+staxItemWriter.write(trade);
+```
+
+### JSON Item Readers And Writers
+
+Spring Batch provides support for reading and writing JSON resources in the following format:
+
+```
+[
+ {
+ "isin": "123",
+ "quantity": 1,
+ "price": 1.2,
+ "customer": "foo"
+ },
+ {
+ "isin": "456",
+ "quantity": 2,
+ "price": 1.4,
+ "customer": "bar"
+ }
+]
+```
+
+It is assumed that the JSON resource is an array of JSON objects corresponding to
+individual items. Spring Batch is not tied to any particular JSON library.
+
+#### `JsonItemReader`
+
+The `JsonItemReader` delegates JSON parsing and binding to implementations of the
+`org.springframework.batch.item.json.JsonObjectReader` interface. This interface
+is intended to be implemented by using a streaming API to read JSON objects
+in chunks. Two implementations are currently provided:
+
+* [Jackson](https://github.com/FasterXML/jackson) through the `org.springframework.batch.item.json.JacksonJsonObjectReader`
+
+* [Gson](https://github.com/google/gson) through the `org.springframework.batch.item.json.GsonJsonObjectReader`
+
+To be able to process JSON records, the following is needed:
+
+* `Resource`: A Spring Resource that represents the JSON file to read.
+
+* `JsonObjectReader`: A JSON object reader to parse and bind JSON objects to items.
+
+The following example shows how to define a `JsonItemReader` that works with the
+previous JSON resource `org/springframework/batch/item/json/trades.json` and a
+`JsonObjectReader` based on Jackson:
+
+```
+@Bean
+public JsonItemReader<Trade> jsonItemReader() {
+ return new JsonItemReaderBuilder<Trade>()
+ .jsonObjectReader(new JacksonJsonObjectReader<>(Trade.class))
+ .resource(new ClassPathResource("trades.json"))
+ .name("tradeJsonItemReader")
+ .build();
+}
+```
+
+#### `JsonFileItemWriter`
+
+The `JsonFileItemWriter` delegates the marshalling of items to the
+`org.springframework.batch.item.json.JsonObjectMarshaller` interface. The contract
+of this interface is to take an object and marshall it to a JSON `String`.
+Two implementations are currently provided:
+
+* [Jackson](https://github.com/FasterXML/jackson) through the `org.springframework.batch.item.json.JacksonJsonObjectMarshaller`
+
+* [Gson](https://github.com/google/gson) through the `org.springframework.batch.item.json.GsonJsonObjectMarshaller`
+
+To be able to write JSON records, the following is needed:
+
+* `Resource`: A Spring `Resource` that represents the JSON file to write
+
+* `JsonObjectMarshaller`: A JSON object marshaller to marshall objects to JSON format
+
+The following example shows how to define a `JsonFileItemWriter`:
+
+```
+@Bean
+public JsonFileItemWriter<Trade> jsonFileItemWriter() {
+ return new JsonFileItemWriterBuilder<Trade>()
+ .jsonObjectMarshaller(new JacksonJsonObjectMarshaller<>())
+ .resource(new ClassPathResource("trades.json"))
+ .name("tradeJsonFileItemWriter")
+ .build();
+}
+```
+
+### Multi-File Input
+
+It is a common requirement to process multiple files within a single `Step`. Assuming the
+files all have the same formatting, the `MultiResourceItemReader` supports this type of
+input for both XML and flat file processing. Consider the following files in a directory:
+
+```
+file-1.txt file-2.txt ignored.txt
+```
+
+file-1.txt and file-2.txt are formatted the same and, for business reasons, should be
+processed together. The `MultiResourceItemReader` can be used to read in both files by
+using wildcards.
+
+The following example shows how to read files with wildcards in XML:
+
+XML Configuration
+
+```
+<bean id="multiResourceReader" class="org.springframework.batch.item.file.MultiResourceItemReader">
+    <property name="resources" value="file:data/input/file-*.txt"/>
+    <property name="delegate" ref="flatFileItemReader"/>
+</bean>
+```
+
+The following example shows how to read files with wildcards in Java:
+
+Java Configuration
+
+```
+@Bean
+public MultiResourceItemReader multiResourceReader() {
+ return new MultiResourceItemReaderBuilder()
+ .delegate(flatFileItemReader())
+ .resources(resources())
+ .build();
+}
+```
+
+The referenced delegate is a simple `FlatFileItemReader`. The above configuration reads
+input from both files, handling rollback and restart scenarios. It should be noted that,
+as with any `ItemReader`, adding extra input (in this case a file) could cause potential
+issues when restarting. It is recommended that batch jobs work with their own individual
+directories until completed successfully.
+
+| |Input resources are ordered by using `MultiResourceItemReader#setComparator(Comparator)` to make sure resource ordering is preserved between job runs in restart scenarios.|
+|---|---|
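The effect of such a comparator can be illustrated with plain file names (by default, the reader orders resources by file name; everything below is a stdlib sketch, not Spring Batch API):

```java
import java.util.Arrays;
import java.util.Comparator;

public class ResourceOrderDemo {
    public static void main(String[] args) {
        // Simulate resources discovered in arbitrary file-system order.
        String[] files = { "file-2.txt", "file-10.txt", "file-1.txt" };

        // A deterministic sort keeps the ordering stable across job runs,
        // which is what matters for restartability.
        Arrays.sort(files, Comparator.naturalOrder());

        // Note: lexicographic order puts "file-10" before "file-2"; supply a
        // custom comparator when numeric ordering matters.
        System.out.println(Arrays.toString(files)); // [file-1.txt, file-10.txt, file-2.txt]
    }
}
```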
+
+### Database
+
+Like most enterprise application styles, a database is the central storage mechanism for
+batch. However, batch differs from other application styles due to the sheer size of the
+datasets with which the system must work. If a SQL statement returns 1 million rows, the
+result set probably holds all returned results in memory until all rows have been read.
+Spring Batch provides two types of solutions for this problem:
+
+* [Cursor-based `ItemReader` Implementations](#cursorBasedItemReaders)
+
+* [Paging `ItemReader` Implementations](#pagingItemReaders)
+
+#### Cursor-based `ItemReader` Implementations
+
+Using a database cursor is generally the default approach of most batch developers,
+because it is the database’s solution to the problem of 'streaming' relational data. The
+Java `ResultSet` class is essentially an object oriented mechanism for manipulating a
+cursor. A `ResultSet` maintains a cursor to the current row of data. Calling `next` on a
+`ResultSet` moves this cursor to the next row. The Spring Batch cursor-based `ItemReader`
+implementation opens a cursor on initialization and moves the cursor forward one row for
+every call to `read`, returning a mapped object that can be used for processing. The
+`close` method is then called to ensure all resources are freed up. The Spring core
+`JdbcTemplate` gets around this problem by using the callback pattern to completely map
+all rows in a `ResultSet` and close before returning control back to the method caller.
+However, in batch, this must wait until the step is complete. The following image shows a
+generic diagram of how a cursor-based `ItemReader` works. Note that, while the example
+uses SQL (because SQL is so widely known), any technology could implement the basic
+approach.
+
+![Cursor Example](https://docs.spring.io/spring-batch/docs/current/reference/html/images/cursorExample.png)
+
+Figure 3. Cursor Example
+
+This example illustrates the basic pattern. Given a 'FOO' table, which has three columns:
+`ID`, `NAME`, and `BAR`, select all rows with an ID greater than 1 but less than 7. This
+puts the beginning of the cursor (row 1) on ID 2. The result of this row should be a
+completely mapped `Foo` object. Calling `read()` again moves the cursor to the next row,
+which is the `Foo` with an ID of 3. The results of these reads are written out after each
+`read`, allowing the objects to be garbage collected (assuming no instance variables are
+maintaining references to them).
+
+##### `JdbcCursorItemReader`
+
+`JdbcCursorItemReader` is the JDBC implementation of the cursor-based technique. It works
+directly with a `ResultSet` and requires an SQL statement to run against a connection
+obtained from a `DataSource`. The following database schema is used as an example:
+
+```
+CREATE TABLE CUSTOMER (
+ ID BIGINT IDENTITY PRIMARY KEY,
+ NAME VARCHAR(45),
+ CREDIT FLOAT
+);
+```
+
+Many people prefer to use a domain object for each row, so the following example uses an
+implementation of the `RowMapper` interface to map a `CustomerCredit` object:
+
+```
+public class CustomerCreditRowMapper implements RowMapper<CustomerCredit> {
+
+ public static final String ID_COLUMN = "id";
+ public static final String NAME_COLUMN = "name";
+ public static final String CREDIT_COLUMN = "credit";
+
+ public CustomerCredit mapRow(ResultSet rs, int rowNum) throws SQLException {
+ CustomerCredit customerCredit = new CustomerCredit();
+
+ customerCredit.setId(rs.getInt(ID_COLUMN));
+ customerCredit.setName(rs.getString(NAME_COLUMN));
+ customerCredit.setCredit(rs.getBigDecimal(CREDIT_COLUMN));
+
+ return customerCredit;
+ }
+}
+```
+
+Because `JdbcCursorItemReader` shares key interfaces with `JdbcTemplate`, it is useful to
+see an example of how to read in this data with `JdbcTemplate`, in order to contrast it
+with the `ItemReader`. For the purposes of this example, assume there are 1,000 rows in
+the `CUSTOMER` database. The first example uses `JdbcTemplate`:
+
+```
+//For simplicity sake, assume a dataSource has already been obtained
+JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
+List<CustomerCredit> customerCredits = jdbcTemplate.query("SELECT ID, NAME, CREDIT from CUSTOMER",
+ new CustomerCreditRowMapper());
+```
+
+After running the preceding code snippet, the `customerCredits` list contains 1,000
+`CustomerCredit` objects. In the query method, a connection is obtained from the
+`DataSource`, the provided SQL is run against it, and the `mapRow` method is called for
+each row in the `ResultSet`. Contrast this with the approach of the
+`JdbcCursorItemReader`, shown in the following example:
+
+```
+JdbcCursorItemReader<CustomerCredit> itemReader = new JdbcCursorItemReader<>();
+itemReader.setDataSource(dataSource);
+itemReader.setSql("SELECT ID, NAME, CREDIT from CUSTOMER");
+itemReader.setRowMapper(new CustomerCreditRowMapper());
+int counter = 0;
+ExecutionContext executionContext = new ExecutionContext();
+itemReader.open(executionContext);
+Object customerCredit = new Object();
+while(customerCredit != null){
+ customerCredit = itemReader.read();
+ counter++;
+}
+itemReader.close();
+```
+
+After running the preceding code snippet, the counter equals 1,000. If the code above had
+put the returned `customerCredit` into a list, the result would have been exactly the
+same as with the `JdbcTemplate` example. However, the big advantage of the `ItemReader`
+is that it allows items to be 'streamed'. The `read` method can be called once, the item
+can be written out by an `ItemWriter`, and then the next item can be obtained with
+`read`. This allows item reading and writing to be done in 'chunks' and committed
+periodically, which is the essence of high performance batch processing. Furthermore, it
+is easily configured for injection into a Spring Batch `Step`.
+
+The following example shows how to inject an `ItemReader` into a `Step` in XML:
+
+XML Configuration
+
+```
+<bean id="itemReader" class="org.springframework.batch.item.database.JdbcCursorItemReader">
+    <property name="dataSource" ref="dataSource"/>
+    <property name="sql" value="select ID, NAME, CREDIT from CUSTOMER"/>
+    <property name="rowMapper">
+        <bean class="org.springframework.batch.sample.domain.CustomerCreditRowMapper"/>
+    </property>
+</bean>
+```
+
+The following example shows how to inject an `ItemReader` into a `Step` in Java:
+
+Java Configuration
+
+```
+@Bean
+public JdbcCursorItemReader<CustomerCredit> itemReader() {
+	return new JdbcCursorItemReaderBuilder<CustomerCredit>()
+ .dataSource(this.dataSource)
+ .name("creditReader")
+ .sql("select ID, NAME, CREDIT from CUSTOMER")
+ .rowMapper(new CustomerCreditRowMapper())
+ .build();
+
+}
+```
+
+###### Additional Properties
+
+Because there are so many varying options for opening a cursor in Java, there are many
+properties on the `JdbcCursorItemReader` that can be set, as described in the following
+table:
+
+| ignoreWarnings | Determines whether or not SQLWarnings are logged or cause an exception. The default is `true` (meaning that warnings are logged). |
+|------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| fetchSize | Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed by the `ResultSet` object used by the`ItemReader`. By default, no hint is given. |
+| maxRows | Sets the limit for the maximum number of rows the underlying `ResultSet` can hold at any one time. |
+| queryTimeout | Sets the number of seconds the driver waits for a `Statement` object to run. If the limit is exceeded, a `DataAccessException` is thrown. (Consult your driver vendor documentation for details). |
+| verifyCursorPosition | Because the same `ResultSet` held by the `ItemReader` is passed to the `RowMapper`, it is possible for users to call `ResultSet.next()` themselves, which could cause issues with the reader’s internal count. Setting this value to `true` causes an exception to be thrown if the cursor position is not the same after the `RowMapper`call as it was before. |
+| saveState | Indicates whether or not the reader’s state should be saved in the`ExecutionContext` provided by `ItemStream#update(ExecutionContext)`. The default is`true`. |
+| driverSupportsAbsolute | Indicates whether the JDBC driver supports setting the absolute row on a `ResultSet`. It is recommended that this is set to `true`for JDBC drivers that support `ResultSet.absolute()`, as it may improve performance, especially if a step fails while working with a large data set. Defaults to `false`. |
+|useSharedExtendedConnection|Indicates whether the connection used for the cursor should be used by all other processing, thus sharing the same transaction. If this is set to `false`, then the cursor is opened with its own connection and does not participate in any transactions started for the rest of the step processing. If you set this flag to `true`, then you must wrap the DataSource in an `ExtendedConnectionDataSourceProxy` to prevent the connection from being closed and released after each commit. When you set this option to `true`, the statement used to open the cursor is created with both 'READ\_ONLY' and 'HOLD\_CURSORS\_OVER\_COMMIT' options. This allows holding the cursor open over transaction start and commits performed in the step processing. To use this feature, you need a database that supports this and a JDBC driver supporting JDBC 3.0 or later. Defaults to `false`.|
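Several of these properties can also be set through `JdbcCursorItemReaderBuilder`. The following sketch extends the earlier reader definition with illustrative values (the builder methods mirror the property names above; the values themselves are examples, not recommendations):

```java
@Bean
public JdbcCursorItemReader<CustomerCredit> itemReader() {
    return new JdbcCursorItemReaderBuilder<CustomerCredit>()
            .dataSource(this.dataSource)
            .name("creditReader")
            .sql("select ID, NAME, CREDIT from CUSTOMER")
            .rowMapper(new CustomerCreditRowMapper())
            .fetchSize(100)              // hint to the JDBC driver
            .queryTimeout(60)            // seconds before a DataAccessException
            .verifyCursorPosition(true)  // guard against RowMapper cursor movement
            .build();
}
```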
+
+##### `HibernateCursorItemReader`
+
+Just as normal Spring users make important decisions about whether or not to use ORM
+solutions, which affect whether or not they use a `JdbcTemplate` or a
+`HibernateTemplate`, Spring Batch users have the same options.
+`HibernateCursorItemReader` is the Hibernate implementation of the cursor technique.
+Hibernate’s usage in batch has been fairly controversial. This has largely been because
+Hibernate was originally developed to support online application styles. However, that
+does not mean it cannot be used for batch processing. The easiest approach for solving
+this problem is to use a `StatelessSession` rather than a standard session. This removes
+all of the caching and dirty checking Hibernate employs and that can cause issues in a
+batch scenario. For more information on the differences between stateless and normal
+hibernate sessions, refer to the documentation of your specific hibernate release. The
+`HibernateCursorItemReader` lets you declare an HQL statement and pass in a
+`SessionFactory`, which will pass back one item per call to read in the same basic
+fashion as the `JdbcCursorItemReader`. The following example configuration uses the same
+'customer credit' example as the JDBC reader:
+
+```
+HibernateCursorItemReader<CustomerCredit> itemReader = new HibernateCursorItemReader<>();
+itemReader.setQueryString("from CustomerCredit");
+//For simplicity sake, assume sessionFactory already obtained.
+itemReader.setSessionFactory(sessionFactory);
+itemReader.setUseStatelessSession(true);
+int counter = 0;
+ExecutionContext executionContext = new ExecutionContext();
+itemReader.open(executionContext);
+Object customerCredit = new Object();
+while(customerCredit != null){
+ customerCredit = itemReader.read();
+ counter++;
+}
+itemReader.close();
+```
+
+This configured `ItemReader` returns `CustomerCredit` objects in the exact same manner
+as described for the `JdbcCursorItemReader`, assuming Hibernate mapping files have been
+created correctly for the `Customer` table. The 'useStatelessSession' property defaults
+to true but has been added here to draw attention to the ability to switch it on or off.
+It is also worth noting that the fetch size of the underlying cursor can be set with the
+`setFetchSize` property. As with `JdbcCursorItemReader`, configuration is
+straightforward.
+
+The following example shows how to inject a Hibernate `ItemReader` in XML:
+
+XML Configuration
+
+```
+<bean id="itemReader"
+      class="org.springframework.batch.item.database.HibernateCursorItemReader">
+    <property name="sessionFactory" ref="sessionFactory" />
+    <property name="queryString" value="from CustomerCredit" />
+</bean>
+```
+
+The following example shows how to inject a Hibernate `ItemReader` in Java:
+
+Java Configuration
+
+```
+@Bean
+public HibernateCursorItemReader<CustomerCredit> itemReader(SessionFactory sessionFactory) {
+    return new HibernateCursorItemReaderBuilder<CustomerCredit>()
+            .name("creditReader")
+            .sessionFactory(sessionFactory)
+            .queryString("from CustomerCredit")
+            .build();
+}
+```
+
+##### `StoredProcedureItemReader`
+
+Sometimes it is necessary to obtain the cursor data by using a stored procedure. The
+`StoredProcedureItemReader` works like the `JdbcCursorItemReader`, except that, instead
+of running a query to obtain a cursor, it runs a stored procedure that returns a cursor.
+The stored procedure can return the cursor in three different ways:
+
+* As a returned `ResultSet` (used by SQL Server, Sybase, DB2, Derby, and MySQL).
+
+* As a ref-cursor returned as an out parameter (used by Oracle and PostgreSQL).
+
+* As the return value of a stored function call.
+
+The following XML example configuration uses the same 'customer credit' example as earlier
+examples:
+
+XML Configuration
+
+```
+<bean id="reader" class="org.springframework.batch.item.database.StoredProcedureItemReader">
+    <property name="dataSource" ref="dataSource"/>
+    <property name="procedureName" value="sp_customer_credit"/>
+    <property name="rowMapper">
+        <bean class="org.springframework.batch.sample.domain.CustomerCreditRowMapper"/>
+    </property>
+</bean>
+```
+
+The following Java example configuration uses the same 'customer credit' example as
+earlier examples:
+
+Java Configuration
+
+```
+@Bean
+public StoredProcedureItemReader reader(DataSource dataSource) {
+ StoredProcedureItemReader reader = new StoredProcedureItemReader();
+
+ reader.setDataSource(dataSource);
+ reader.setProcedureName("sp_customer_credit");
+ reader.setRowMapper(new CustomerCreditRowMapper());
+
+ return reader;
+}
+```
+
+The preceding example relies on the stored procedure to provide a `ResultSet` as a
+returned result (option 1 from earlier).
+
+If the stored procedure returned a `ref-cursor` (option 2), then we would need to provide
+the position of the out parameter that is the returned `ref-cursor`.
+
+The following example shows how to work with the first parameter being a ref-cursor in
+XML:
+
+XML Configuration
+
+```
+<bean id="reader" class="org.springframework.batch.item.database.StoredProcedureItemReader">
+    <property name="dataSource" ref="dataSource"/>
+    <property name="procedureName" value="sp_customer_credit"/>
+    <property name="rowMapper">
+        <bean class="org.springframework.batch.sample.domain.CustomerCreditRowMapper"/>
+    </property>
+    <property name="refCursorPosition" value="1"/>
+</bean>
+```
+
+The following example shows how to work with the first parameter being a ref-cursor in
+Java:
+
+Java Configuration
+
+```
+@Bean
+public StoredProcedureItemReader reader(DataSource dataSource) {
+ StoredProcedureItemReader reader = new StoredProcedureItemReader();
+
+ reader.setDataSource(dataSource);
+ reader.setProcedureName("sp_customer_credit");
+ reader.setRowMapper(new CustomerCreditRowMapper());
+ reader.setRefCursorPosition(1);
+
+ return reader;
+}
+```
+
+If the cursor was returned from a stored function (option 3), we would need to set the
+property "function" to `true`. It defaults to `false`.
+
+The following example shows how to set the `function` property to `true` in XML:
+
+XML Configuration
+
+```
+<bean id="reader" class="org.springframework.batch.item.database.StoredProcedureItemReader">
+    <property name="dataSource" ref="dataSource"/>
+    <property name="procedureName" value="sp_customer_credit"/>
+    <property name="rowMapper">
+        <bean class="org.springframework.batch.sample.domain.CustomerCreditRowMapper"/>
+    </property>
+    <property name="function" value="true"/>
+</bean>
+```
+
+The following example shows how to set the `function` property to `true` in Java:
+
+Java Configuration
+
+```
+@Bean
+public StoredProcedureItemReader reader(DataSource dataSource) {
+ StoredProcedureItemReader reader = new StoredProcedureItemReader();
+
+ reader.setDataSource(dataSource);
+ reader.setProcedureName("sp_customer_credit");
+ reader.setRowMapper(new CustomerCreditRowMapper());
+ reader.setFunction(true);
+
+ return reader;
+}
+```
+
+In all of these cases, we need to define a `RowMapper` as well as a `DataSource` and the
+actual procedure name.
+
+If the stored procedure or function takes in parameters, they must be declared and
+set by using the `parameters` property. The following example, for Oracle, declares three
+parameters. The first one is the `out` parameter that returns the ref-cursor, and the
+second and third are `in` parameters that take values of type `INTEGER`.
+
+The following example shows how to work with parameters in XML:
+
+XML Configuration
+
+```
+<bean id="reader" class="org.springframework.batch.item.database.StoredProcedureItemReader">
+    <property name="dataSource" ref="dataSource"/>
+    <property name="procedureName" value="spring.cursor_func"/>
+    <property name="parameters">
+        <list>
+            <bean class="org.springframework.jdbc.core.SqlOutParameter">
+                <constructor-arg index="0" value="newid"/>
+                <constructor-arg index="1">
+                    <util:constant static-field="oracle.jdbc.OracleTypes.CURSOR"/>
+                </constructor-arg>
+            </bean>
+            <bean class="org.springframework.jdbc.core.SqlParameter">
+                <constructor-arg index="0" value="amount"/>
+                <constructor-arg index="1">
+                    <util:constant static-field="java.sql.Types.INTEGER"/>
+                </constructor-arg>
+            </bean>
+            <bean class="org.springframework.jdbc.core.SqlParameter">
+                <constructor-arg index="0" value="custid"/>
+                <constructor-arg index="1">
+                    <util:constant static-field="java.sql.Types.INTEGER"/>
+                </constructor-arg>
+            </bean>
+        </list>
+    </property>
+    <property name="refCursorPosition" value="1"/>
+    <property name="rowMapper" ref="rowMapper"/>
+    <property name="preparedStatementSetter" ref="parameterSetter"/>
+</bean>
+```
+
+The following example shows how to work with parameters in Java:
+
+Java Configuration
+
+```
+@Bean
+public StoredProcedureItemReader reader(DataSource dataSource) {
+    List<SqlParameter> parameters = new ArrayList<>();
+    parameters.add(new SqlOutParameter("newId", OracleTypes.CURSOR));
+    parameters.add(new SqlParameter("amount", Types.INTEGER));
+    parameters.add(new SqlParameter("custId", Types.INTEGER));
+
+    StoredProcedureItemReader reader = new StoredProcedureItemReader();
+
+    reader.setDataSource(dataSource);
+    reader.setProcedureName("spring.cursor_func");
+    reader.setParameters(parameters);
+    reader.setRefCursorPosition(1);
+    reader.setRowMapper(rowMapper());
+    reader.setPreparedStatementSetter(parameterSetter());
+
+    return reader;
+}
+```
+
+In addition to the parameter declarations, we need to specify a `PreparedStatementSetter`
+implementation that sets the parameter values for the call. This works the same as for
+the `JdbcCursorItemReader` above. All the additional properties listed in
+[Additional Properties](#JdbcCursorItemReaderProperties) apply to the `StoredProcedureItemReader` as well.
+
+#### Paging `ItemReader` Implementations
+
+An alternative to using a database cursor is running multiple queries where each query
+fetches a portion of the results. We refer to this portion as a page. Each query must
+specify the starting row number and the number of rows that we want returned in the page.
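+
+A sketch (plain Java, not the Spring Batch API) of how such page queries can be derived
+from a select clause, a from clause, and a sort key — the `firstPageSql` and
+`remainingPagesSql` helpers below are illustrations, not framework methods:
+
+```
+public class PagingSketch {
+
+    // First page: the base query, sorted by the key and limited to pageSize rows.
+    static String firstPageSql(String select, String from, String sortKey, int pageSize) {
+        return select + " " + from + " ORDER BY " + sortKey
+                + " ASC FETCH FIRST " + pageSize + " ROWS ONLY";
+    }
+
+    // Later pages: restart after the last sort-key value seen on the previous page.
+    static String remainingPagesSql(String select, String from, String sortKey, int pageSize) {
+        return select + " " + from + " WHERE " + sortKey + " > ? ORDER BY " + sortKey
+                + " ASC FETCH FIRST " + pageSize + " ROWS ONLY";
+    }
+
+    public static void main(String[] args) {
+        System.out.println(firstPageSql("select id, name, credit", "from customer", "id", 1000));
+        System.out.println(remainingPagesSql("select id, name, credit", "from customer", "id", 1000));
+    }
+}
+```
+
+The `>` predicate on the sort key is what prevents rows from being skipped or repeated
+between pages, which is also why the sort key must be unique.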
+
+##### `JdbcPagingItemReader`
+
+One implementation of a paging `ItemReader` is the `JdbcPagingItemReader`. The
+`JdbcPagingItemReader` needs a `PagingQueryProvider` responsible for providing the SQL
+queries used to retrieve the rows making up a page. Since each database has its own
+strategy for providing paging support, we need to use a different `PagingQueryProvider`
+for each supported database type. There is also the `SqlPagingQueryProviderFactoryBean`
+that auto-detects the database that is being used and determines the appropriate
+`PagingQueryProvider` implementation. This simplifies the configuration and is the
+recommended best practice.
+
+The `SqlPagingQueryProviderFactoryBean` requires that you specify a `select` clause and a
+`from` clause. You can also provide an optional `where` clause. These clauses and the
+required `sortKey` are used to build an SQL statement.
+
+| |It is important to have a unique key constraint on the `sortKey` to guarantee that no data is lost between executions.|
+|---|--------------------------------------------------------------------------------------------------------------------------|
+
+After the reader has been opened, it passes back one item per call to `read` in the same
+basic fashion as any other `ItemReader`. The paging happens behind the scenes when
+additional rows are needed.
+
+The following XML example configuration uses a similar 'customer credit' example as the
+cursor-based `ItemReaders` shown previously:
+
+XML Configuration
+
+```
+<bean id="itemReader" class="org.springframework.batch.item.database.JdbcPagingItemReader">
+    <property name="dataSource" ref="dataSource"/>
+    <property name="queryProvider">
+        <bean class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
+            <property name="selectClause" value="select id, name, credit"/>
+            <property name="fromClause" value="from customer"/>
+            <property name="whereClause" value="where status=:status"/>
+            <property name="sortKey" value="id"/>
+        </bean>
+    </property>
+    <property name="parameterValues">
+        <map>
+            <entry key="status" value="NEW"/>
+        </map>
+    </property>
+    <property name="pageSize" value="1000"/>
+    <property name="rowMapper" ref="customerMapper"/>
+</bean>
+```
+
+The following Java example configuration uses a similar 'customer credit' example as the
+cursor-based `ItemReaders` shown previously:
+
+Java Configuration
+
+```
+@Bean
+public JdbcPagingItemReader<CustomerCredit> itemReader(DataSource dataSource, PagingQueryProvider queryProvider) {
+    Map<String, Object> parameterValues = new HashMap<>();
+    parameterValues.put("status", "NEW");
+
+    return new JdbcPagingItemReaderBuilder<CustomerCredit>()
+            .name("creditReader")
+            .dataSource(dataSource)
+            .queryProvider(queryProvider)
+            .parameterValues(parameterValues)
+            .rowMapper(customerCreditMapper())
+            .pageSize(1000)
+            .build();
+}
+
+@Bean
+public SqlPagingQueryProviderFactoryBean queryProvider() {
+ SqlPagingQueryProviderFactoryBean provider = new SqlPagingQueryProviderFactoryBean();
+
+ provider.setSelectClause("select id, name, credit");
+ provider.setFromClause("from customer");
+ provider.setWhereClause("where status=:status");
+ provider.setSortKey("id");
+
+ return provider;
+}
+```
+
+This configured `ItemReader` returns `CustomerCredit` objects using the `RowMapper`,
+which must be specified. The 'pageSize' property determines the number of entities read
+from the database for each query run.
+
+The 'parameterValues' property can be used to specify a `Map` of parameter values for the
+query. If you use named parameters in the `where` clause, the key for each entry should
+match the name of the named parameter. If you use a traditional '?' placeholder, then the
+key for each entry should be the number of the placeholder, starting with 1.
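+
+As a quick sketch of the two keying styles (plain Java with hypothetical values):
+
+```
+import java.util.Map;
+
+public class ParameterValuesSketch {
+
+    public static void main(String[] args) {
+        // where clause: "where status = :status" -- keys match the named parameters
+        Map<String, Object> named = Map.of("status", "NEW");
+
+        // where clause: "where status = ?" -- keys are placeholder positions, starting at 1
+        Map<String, Object> positional = Map.of("1", "NEW");
+
+        System.out.println(named.get("status"));
+        System.out.println(positional.get("1"));
+    }
+}
+```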
+
+##### `JpaPagingItemReader`
+
+Another implementation of a paging `ItemReader` is the `JpaPagingItemReader`. JPA does
+not have a concept similar to the Hibernate `StatelessSession`, so we have to use other
+features provided by the JPA specification. Since JPA supports paging, this is a natural
+choice when it comes to using JPA for batch processing. After each page is read, the
+entities become detached and the persistence context is cleared, to allow the entities to
+be garbage collected once the page is processed.
+
+The `JpaPagingItemReader` lets you declare a JPQL statement and pass in an
+`EntityManagerFactory`. It then passes back one item per call to read in the same basic
+fashion as any other `ItemReader`. The paging happens behind the scenes when additional
+entities are needed.
+
+The following XML example configuration uses the same 'customer credit' example as the
+JDBC reader shown previously:
+
+XML Configuration
+
+```
+<bean id="itemReader" class="org.springframework.batch.item.database.JpaPagingItemReader">
+    <property name="entityManagerFactory" ref="entityManagerFactory"/>
+    <property name="queryString" value="select c from CustomerCredit c"/>
+    <property name="pageSize" value="1000"/>
+</bean>
+```
+
+The following Java example configuration uses the same 'customer credit' example as the
+JDBC reader shown previously:
+
+Java Configuration
+
+```
+@Bean
+public JpaPagingItemReader<CustomerCredit> itemReader() {
+    return new JpaPagingItemReaderBuilder<CustomerCredit>()
+            .name("creditReader")
+            .entityManagerFactory(entityManagerFactory())
+            .queryString("select c from CustomerCredit c")
+            .pageSize(1000)
+            .build();
+}
+```
+
+This configured `ItemReader` returns `CustomerCredit` objects in the exact same manner as
+described for the `JdbcPagingItemReader` above, assuming the `CustomerCredit` object has the
+correct JPA annotations or ORM mapping file. The 'pageSize' property determines the
+number of entities read from the database for each query execution.
+
+#### Database ItemWriters
+
+While both flat files and XML files have a specific `ItemWriter` instance, there is no exact equivalent
+in the database world. This is because transactions provide all the needed functionality.
+`ItemWriter` implementations are necessary for files because they must act as if they’re transactional,
+keeping track of written items and flushing or clearing at the appropriate times.
+Databases have no need for this functionality, since the write is already contained in a
+transaction. Users can create their own DAOs that implement the `ItemWriter` interface or
+use one from a custom `ItemWriter` that’s written for generic processing concerns. Either
+way, they should work without any issues. One thing to look out for is the performance
+and error handling capabilities that are provided by batching the outputs. This is most
+common when using Hibernate as an `ItemWriter` but could have the same issues when using
+JDBC batch mode. Batching database output does not have any inherent flaws, assuming we
+are careful to flush and there are no errors in the data. However, any errors while
+writing can cause confusion, because there is no way to know which individual item caused
+an exception or even if any individual item was responsible, as illustrated in the
+following image:
+
+![Error On Flush](https://docs.spring.io/spring-batch/docs/current/reference/html/images/errorOnFlush.png)
+
+Figure 4. Error On Flush
+
+If items are buffered before being written, any errors are not thrown until the buffer is
+flushed just before a commit. For example, assume that 20 items are written per chunk,
+and the 15th item throws a `DataIntegrityViolationException`. As far as the `Step`
+is concerned, all 20 items are written successfully, since there is no way to know that an
+error occurs until they are actually written. Once `Session#flush()` is called, the
+buffer is emptied and the exception is hit. At this point, there is nothing the `Step`
+can do. The transaction must be rolled back. Normally, this exception might cause the
+item to be skipped (depending upon the skip/retry policies), and then it is not written
+again. However, in the batched scenario, there is no way to know which item caused the
+issue. The whole buffer was being written when the failure happened. The only way to
+solve this issue is to flush after each item, as shown in the following image:
+
+![Error On Write](https://docs.spring.io/spring-batch/docs/current/reference/html/images/errorOnWrite.png)
+
+Figure 5. Error On Write
+
+This is a common use case, especially when using Hibernate, and the simple guideline for
+implementations of `ItemWriter` is to flush on each call to `write()`. Doing so allows
+for items to be skipped reliably, with Spring Batch internally taking care of the
+granularity of the calls to `ItemWriter` after an error.
+
+### Reusing Existing Services
+
+Batch systems are often used in conjunction with other application styles. The most
+common is an online system, but it may also support integration or even a thick client
+application by moving necessary bulk data that each application style uses. For this
+reason, it is common that many users want to reuse existing DAOs or other services within
+their batch jobs. The Spring container itself makes this fairly easy by allowing any
+necessary class to be injected. However, there may be cases where the existing service
+needs to act as an `ItemReader` or `ItemWriter`, either to satisfy the dependency of
+another Spring Batch class or because it truly is the main `ItemReader` for a step. It is
+fairly trivial to write an adapter class for each service that needs wrapping, but
+because it is such a common concern, Spring Batch provides implementations:
+`ItemReaderAdapter` and `ItemWriterAdapter`. Both classes follow the standard Spring
+pattern of delegating to a configured target object and method, and are fairly simple to set up.
+
+The following XML example uses the `ItemReaderAdapter`:
+
+XML Configuration
+
+```
+<bean id="itemReader" class="org.springframework.batch.item.adapter.ItemReaderAdapter">
+    <property name="targetObject" ref="fooService"/>
+    <property name="targetMethod" value="generateFoo"/>
+</bean>
+
+<bean id="fooService" class="org.springframework.batch.item.sample.FooService"/>
+```
+
+The following Java example uses the `ItemReaderAdapter`:
+
+Java Configuration
+
+```
+@Bean
+public ItemReaderAdapter itemReader() {
+ ItemReaderAdapter reader = new ItemReaderAdapter();
+
+ reader.setTargetObject(fooService());
+ reader.setTargetMethod("generateFoo");
+
+ return reader;
+}
+
+@Bean
+public FooService fooService() {
+ return new FooService();
+}
+```
+
+One important point to note is that the contract of the `targetMethod` must be the same
+as the contract for `read`: When exhausted, it returns `null`. Otherwise, it returns an
+`Object`. Anything else prevents the framework from knowing when processing should end,
+either causing an infinite loop or incorrect failure, depending upon the implementation
+of the `ItemWriter`.
+
+The following XML example uses the `ItemWriterAdapter`:
+
+XML Configuration
+
+```
+<bean id="itemWriter" class="org.springframework.batch.item.adapter.ItemWriterAdapter">
+    <property name="targetObject" ref="fooService"/>
+    <property name="targetMethod" value="processFoo"/>
+</bean>
+
+<bean id="fooService" class="org.springframework.batch.item.sample.FooService"/>
+```
+
+The following Java example uses the `ItemWriterAdapter`:
+
+Java Configuration
+
+```
+@Bean
+public ItemWriterAdapter itemWriter() {
+ ItemWriterAdapter writer = new ItemWriterAdapter();
+
+ writer.setTargetObject(fooService());
+ writer.setTargetMethod("processFoo");
+
+ return writer;
+}
+
+@Bean
+public FooService fooService() {
+ return new FooService();
+}
+```
+
+### Preventing State Persistence
+
+By default, all of the `ItemReader` and `ItemWriter` implementations store their current
+state in the `ExecutionContext` before it is committed. However, this may not always be
+the desired behavior. For example, many developers choose to make their database readers
+'rerunnable' by using a process indicator. An extra column is added to the input data to
+indicate whether or not it has been processed. When a particular record is being read (or
+written) the processed flag is flipped from `false` to `true`. The SQL statement can then
+contain an extra statement in the `where` clause, such as `where PROCESSED_IND = false`,
+thereby ensuring that only unprocessed records are returned in the case of a restart. In
+this scenario, it is preferable to not store any state, such as the current row number,
+since it is irrelevant upon restart. For this reason, all readers and writers include the
+'saveState' property.
+
+The following bean definition shows how to prevent state persistence in XML:
+
+XML Configuration
+
+```
+<bean id="playerSummarizationSource" class="org.springframework.batch.item.database.JdbcCursorItemReader">
+    <property name="dataSource" ref="dataSource"/>
+    <property name="rowMapper">
+        <bean class="org.springframework.batch.sample.PlayerSummaryMapper"/>
+    </property>
+    <property name="saveState" value="false"/>
+    <property name="sql">
+        <value>
+            SELECT games.player_id, games.year_no, SUM(COMPLETES),
+            SUM(ATTEMPTS), SUM(PASSING_YARDS), SUM(PASSING_TD),
+            SUM(INTERCEPTIONS), SUM(RUSHES), SUM(RUSH_YARDS),
+            SUM(RECEPTIONS), SUM(RECEPTIONS_YARDS), SUM(TOTAL_TD)
+            from games, players where players.player_id =
+            games.player_id group by games.player_id, games.year_no
+        </value>
+    </property>
+</bean>
+```
+
+The following bean definition shows how to prevent state persistence in Java:
+
+Java Configuration
+
+```
+@Bean
+public JdbcCursorItemReader<PlayerSummary> playerSummarizationSource(DataSource dataSource) {
+    return new JdbcCursorItemReaderBuilder<PlayerSummary>()
+            .dataSource(dataSource)
+            .rowMapper(new PlayerSummaryMapper())
+            .saveState(false)
+            .sql("SELECT games.player_id, games.year_no, SUM(COMPLETES), "
+                    + "SUM(ATTEMPTS), SUM(PASSING_YARDS), SUM(PASSING_TD), "
+                    + "SUM(INTERCEPTIONS), SUM(RUSHES), SUM(RUSH_YARDS), "
+                    + "SUM(RECEPTIONS), SUM(RECEPTIONS_YARDS), SUM(TOTAL_TD) "
+                    + "from games, players where players.player_id = "
+                    + "games.player_id group by games.player_id, games.year_no")
+            .build();
+}
+```
+
+The `ItemReader` configured above does not make any entries in the `ExecutionContext` for
+any executions in which it participates.
+
+### Creating Custom ItemReaders and ItemWriters
+
+So far, this chapter has discussed the basic contracts of reading and writing in Spring
+Batch and some common implementations for doing so. However, these are all fairly
+generic, and there are many potential scenarios that may not be covered by out-of-the-box
+implementations. This section shows, by using a simple example, how to create a custom
+`ItemReader` and `ItemWriter` implementation and implement their contracts correctly. The
+`ItemReader` also implements `ItemStream`, in order to illustrate how to make a reader or
+writer restartable.
+
+#### Custom `ItemReader` Example
+
+For the purpose of this example, we create a simple `ItemReader` implementation that
+reads from a provided list. We start by implementing the most basic contract of
+`ItemReader`, the `read` method, as shown in the following code:
+
+```
+public class CustomItemReader<T> implements ItemReader<T> {
+
+    List<T> items;
+
+    public CustomItemReader(List<T> items) {
+        this.items = items;
+    }
+
+    public T read() throws Exception, UnexpectedInputException,
+            NonTransientResourceException, ParseException {
+
+        if (!items.isEmpty()) {
+            return items.remove(0);
+        }
+        return null;
+    }
+}
+```
+
+The preceding class takes a list of items and returns them one at a time, removing each
+from the list. When the list is empty, it returns `null`, thus satisfying the most basic
+requirements of an `ItemReader`, as illustrated in the following test code:
+
+```
+List<String> items = new ArrayList<>();
+items.add("1");
+items.add("2");
+items.add("3");
+
+ItemReader<String> itemReader = new CustomItemReader<>(items);
+assertEquals("1", itemReader.read());
+assertEquals("2", itemReader.read());
+assertEquals("3", itemReader.read());
+assertNull(itemReader.read());
+```
+
+##### Making the `ItemReader` Restartable
+
+The final challenge is to make the `ItemReader` restartable. Currently, if processing is
+interrupted and begins again, the `ItemReader` must start at the beginning. This is
+actually valid in many scenarios, but it is sometimes preferable that a batch job
+restarts where it left off. The key discriminant is often whether the reader is stateful
+or stateless. A stateless reader does not need to worry about restartability, but a
+stateful one has to try to reconstitute its last known state on restart. For this reason,
+we recommend that you keep custom readers stateless if possible, so you need not worry
+about restartability.
+
+If you do need to store state, then the `ItemStream` interface should be used:
+
+```
+public class CustomItemReader<T> implements ItemReader<T>, ItemStream {
+
+    List<T> items;
+    int currentIndex = 0;
+    private static final String CURRENT_INDEX = "current.index";
+
+    public CustomItemReader(List<T> items) {
+        this.items = items;
+    }
+
+    public T read() throws Exception, UnexpectedInputException,
+            ParseException, NonTransientResourceException {
+
+        if (currentIndex < items.size()) {
+            return items.get(currentIndex++);
+        }
+
+        return null;
+    }
+
+    public void open(ExecutionContext executionContext) throws ItemStreamException {
+        if (executionContext.containsKey(CURRENT_INDEX)) {
+            currentIndex = (int) executionContext.getLong(CURRENT_INDEX);
+        }
+        else {
+            currentIndex = 0;
+        }
+    }
+
+    public void update(ExecutionContext executionContext) throws ItemStreamException {
+        executionContext.putLong(CURRENT_INDEX, currentIndex);
+    }
+
+    public void close() throws ItemStreamException {}
+}
+```
+
+On each call to the `ItemStream` `update` method, the current index of the `ItemReader`
+is stored in the provided `ExecutionContext` with a key of 'current.index'. When the
+`ItemStream` `open` method is called, the `ExecutionContext` is checked to see if it
+contains an entry with that key. If the key is found, then the current index is moved to
+that location. This is a fairly trivial example, but it still meets the general contract:
+
+```
+ExecutionContext executionContext = new ExecutionContext();
+((ItemStream)itemReader).open(executionContext);
+assertEquals("1", itemReader.read());
+((ItemStream)itemReader).update(executionContext);
+
+List<String> items = new ArrayList<>();
+items.add("1");
+items.add("2");
+items.add("3");
+itemReader = new CustomItemReader<>(items);
+
+((ItemStream)itemReader).open(executionContext);
+assertEquals("2", itemReader.read());
+```
+
+Most `ItemReaders` have much more sophisticated restart logic. The
+`JdbcCursorItemReader`, for example, stores the row ID of the last processed row in the
+cursor.
+
+It is also worth noting that the key used within the `ExecutionContext` should not be
+trivial. That is because the same `ExecutionContext` is used for all `ItemStreams` within
+a `Step`. In most cases, simply prepending the key with the class name should be enough
+to guarantee uniqueness. However, in the rare cases where two of the same type of
+`ItemStream` are used in the same step (which can happen if two files are needed for
+output), a more unique name is needed. For this reason, many of the Spring Batch
+`ItemReader` and `ItemWriter` implementations have a `setName()` property that lets this
+key name be overridden.
+
+#### Custom `ItemWriter` Example
+
+Implementing a custom `ItemWriter` is similar in many ways to the `ItemReader` example
+above but differs in enough ways as to warrant its own example. However, adding
+restartability is essentially the same, so it is not covered in this example. As with the
+`ItemReader` example, a `List` is used in order to keep the example as simple as
+possible:
+
+```
+public class CustomItemWriter<T> implements ItemWriter<T> {
+
+    List<T> output = TransactionAwareProxyFactory.createTransactionalList();
+
+    public void write(List<? extends T> items) throws Exception {
+        output.addAll(items);
+    }
+
+    public List<T> getOutput() {
+        return output;
+    }
+}
+```
+
+##### Making the `ItemWriter` Restartable
+
+To make the `ItemWriter` restartable, we would follow the same process as for the
+`ItemReader`, adding and implementing the `ItemStream` interface to synchronize the
+execution context. In the example, we might have to count the number of items processed
+and add that as a footer record. If we needed to do that, we could implement
+`ItemStream` in our `ItemWriter` so that the counter was reconstituted from the execution
+context if the stream was re-opened.
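+
+A minimal sketch of that idea, using a plain `Map` as a stand-in for Spring Batch’s
+`ExecutionContext` (the class and key names here are illustrative, not framework API):
+
+```
+import java.util.*;
+
+public class RestartableWriterSketch {
+
+    static class CountingWriter {
+        static final String COUNT_KEY = "custom.writer.item.count";
+        long count;
+
+        // open(): reconstitute the counter if a previous execution stored it
+        void open(Map<String, Object> executionContext) {
+            count = (long) executionContext.getOrDefault(COUNT_KEY, 0L);
+        }
+
+        void write(List<String> items) {
+            count += items.size();
+        }
+
+        // update(): called before each commit, so the counter survives a failure
+        void update(Map<String, Object> executionContext) {
+            executionContext.put(COUNT_KEY, count);
+        }
+    }
+
+    public static void main(String[] args) {
+        Map<String, Object> ctx = new HashMap<>();
+        CountingWriter writer = new CountingWriter();
+        writer.open(ctx);
+        writer.write(Arrays.asList("a", "b"));
+        writer.update(ctx);
+
+        // simulate a restart: a new writer instance picks up the saved count
+        CountingWriter restarted = new CountingWriter();
+        restarted.open(ctx);
+        System.out.println(restarted.count); // 2
+    }
+}
+```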
+
+In many realistic cases, custom `ItemWriters` also delegate to another writer that itself
+is restartable (for example, when writing to a file), or else it writes to a
+transactional resource and so does not need to be restartable, because it is stateless.
+When you have a stateful writer you should probably be sure to implement `ItemStream` as
+well as `ItemWriter`. Remember also that the client of the writer needs to be aware of
+the `ItemStream`, so you may need to register it as a stream in the configuration.
+
+### Item Reader and Writer Implementations
+
+In this section, we will introduce you to readers and writers that have not already been
+discussed in the previous sections.
+
+#### Decorators
+
+In some cases, a user needs specialized behavior to be appended to a pre-existing
+`ItemReader`. Spring Batch offers some out-of-the-box decorators that can add
+additional behavior to your `ItemReader` and `ItemWriter` implementations.
+
+Spring Batch includes the following decorators:
+
+* [`SynchronizedItemStreamReader`](#synchronizedItemStreamReader)
+
+* [`SingleItemPeekableItemReader`](#singleItemPeekableItemReader)
+
+* [`SynchronizedItemStreamWriter`](#synchronizedItemStreamWriter)
+
+* [`MultiResourceItemWriter`](#multiResourceItemWriter)
+
+* [`ClassifierCompositeItemWriter`](#classifierCompositeItemWriter)
+
+* [`ClassifierCompositeItemProcessor`](#classifierCompositeItemProcessor)
+
+##### `SynchronizedItemStreamReader`
+
+When using an `ItemReader` that is not thread safe, Spring Batch offers the
+`SynchronizedItemStreamReader` decorator, which can be used to make the `ItemReader`
+thread safe. Spring Batch provides a `SynchronizedItemStreamReaderBuilder` to construct
+an instance of the `SynchronizedItemStreamReader`.
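+
+The decorator idea itself is simple: serialize calls to the delegate’s `read` behind a
+single lock. A self-contained plain-Java sketch (not the framework class):
+
+```
+import java.util.Iterator;
+import java.util.List;
+
+public class SynchronizedReaderSketch {
+
+    interface Reader<T> { T read(); }
+
+    // Wrap a non-thread-safe reader so that only one thread reads at a time.
+    static <T> Reader<T> synchronize(Reader<T> delegate) {
+        Object lock = new Object();
+        return () -> {
+            synchronized (lock) {
+                return delegate.read();
+            }
+        };
+    }
+
+    public static void main(String[] args) {
+        Iterator<Integer> source = List.of(1, 2, 3).iterator();
+        Reader<Integer> reader = synchronize(() -> source.hasNext() ? source.next() : null);
+        System.out.println(reader.read()); // 1
+        System.out.println(reader.read()); // 2
+    }
+}
+```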
+
+##### `SingleItemPeekableItemReader`
+
+Spring Batch includes a decorator that adds a peek method to an `ItemReader`. This peek
+method lets the user peek one item ahead. Repeated calls to `peek` return the same
+item, which is the next item returned from the `read` method. Spring Batch provides a
+`SingleItemPeekableItemReaderBuilder` to construct an instance of the
+`SingleItemPeekableItemReader`.
+
+| |SingleItemPeekableItemReader’s peek method is not thread-safe, because it would not be possible to honor the peek in multiple threads. Only one of the threads that peeked would get that item in the next call to read.|
+|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
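+
+The buffering contract can be sketched in a few lines of plain Java (illustrative, not
+the framework class): `peek` fills a one-item buffer, and `read` drains it first.
+
+```
+import java.util.*;
+
+public class PeekableReaderSketch {
+
+    static class PeekableReader {
+        private final Iterator<String> delegate;
+        private String peeked; // one-item look-ahead buffer
+
+        PeekableReader(Iterator<String> delegate) {
+            this.delegate = delegate;
+        }
+
+        // Repeated peeks return the same buffered item.
+        String peek() {
+            if (peeked == null && delegate.hasNext()) {
+                peeked = delegate.next();
+            }
+            return peeked;
+        }
+
+        // read() drains the buffer first, so a peeked item is never skipped.
+        String read() {
+            if (peeked != null) {
+                String item = peeked;
+                peeked = null;
+                return item;
+            }
+            return delegate.hasNext() ? delegate.next() : null;
+        }
+    }
+
+    public static void main(String[] args) {
+        PeekableReader reader = new PeekableReader(Arrays.asList("1", "2").iterator());
+        System.out.println(reader.peek()); // 1
+        System.out.println(reader.peek()); // still 1
+        System.out.println(reader.read()); // 1
+        System.out.println(reader.read()); // 2
+        System.out.println(reader.read()); // null
+    }
+}
+```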
+
+##### `SynchronizedItemStreamWriter`
+
+When using an `ItemWriter` that is not thread safe, Spring Batch offers the
+`SynchronizedItemStreamWriter` decorator, which can be used to make the `ItemWriter`
+thread safe. Spring Batch provides a `SynchronizedItemStreamWriterBuilder` to construct
+an instance of the `SynchronizedItemStreamWriter`.
+
+##### `MultiResourceItemWriter`
+
+The `MultiResourceItemWriter` wraps a `ResourceAwareItemWriterItemStream` and creates a new
+output resource when the count of items written in the current resource exceeds the
+`itemCountLimitPerResource`. Spring Batch provides a `MultiResourceItemWriterBuilder` to
+construct an instance of the `MultiResourceItemWriter`.
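+
+The rollover rule can be sketched in plain Java, with lists standing in for output
+resources (`writeWithRollover` is an illustrative helper, not the framework API):
+
+```
+import java.util.*;
+
+public class MultiResourceSketch {
+
+    static List<List<String>> writeWithRollover(List<String> items, int itemCountLimitPerResource) {
+        List<List<String>> resources = new ArrayList<>();
+        List<String> current = new ArrayList<>();
+        resources.add(current);
+        for (String item : items) {
+            if (current.size() >= itemCountLimitPerResource) {
+                current = new ArrayList<>();   // open a new output resource
+                resources.add(current);
+            }
+            current.add(item);
+        }
+        return resources;
+    }
+
+    public static void main(String[] args) {
+        System.out.println(writeWithRollover(Arrays.asList("a", "b", "c", "d", "e"), 2));
+        // [[a, b], [c, d], [e]]
+    }
+}
+```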
+
+##### `ClassifierCompositeItemWriter`
+
+The `ClassifierCompositeItemWriter` calls one of a collection of `ItemWriter`
+implementations for each item, based on a router pattern implemented through the provided
+`Classifier`. The implementation is thread-safe if all delegates are thread-safe. Spring
+Batch provides a `ClassifierCompositeItemWriterBuilder` to construct an instance of the
+`ClassifierCompositeItemWriter`.
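+
+The router pattern itself reduces to "classifier maps item to delegate writer". A
+plain-Java sketch, with lists standing in for the delegate writers:
+
+```
+import java.util.*;
+import java.util.function.Function;
+
+public class ClassifierWriterSketch {
+
+    public static void main(String[] args) {
+        List<Integer> evens = new ArrayList<>();
+        List<Integer> odds = new ArrayList<>();
+
+        // the "classifier": routes each item to one of the delegate writers
+        Function<Integer, List<Integer>> classifier = i -> (i % 2 == 0) ? evens : odds;
+
+        for (int item : List.of(1, 2, 3, 4)) {
+            classifier.apply(item).add(item);
+        }
+
+        System.out.println(evens); // [2, 4]
+        System.out.println(odds);  // [1, 3]
+    }
+}
+```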
+
+##### `ClassifierCompositeItemProcessor`
+
+The `ClassifierCompositeItemProcessor` is an `ItemProcessor` that calls one of a
+collection of `ItemProcessor` implementations, based on a router pattern implemented
+through the provided `Classifier`. Spring Batch provides a
+`ClassifierCompositeItemProcessorBuilder` to construct an instance of the
+`ClassifierCompositeItemProcessor`.
+
+#### Messaging Readers And Writers
+
+Spring Batch offers the following readers and writers for commonly used messaging systems:
+
+* [`AmqpItemReader`](#amqpItemReader)
+
+* [`AmqpItemWriter`](#amqpItemWriter)
+
+* [`JmsItemReader`](#jmsItemReader)
+
+* [`JmsItemWriter`](#jmsItemWriter)
+
+* [`KafkaItemReader`](#kafkaItemReader)
+
+* [`KafkaItemWriter`](#kafkaItemWriter)
+
+##### `AmqpItemReader`
+
+The `AmqpItemReader` is an `ItemReader` that uses an `AmqpTemplate` to receive or convert
+messages from an exchange. Spring Batch provides an `AmqpItemReaderBuilder` to construct
+an instance of the `AmqpItemReader`.
+
+##### `AmqpItemWriter`
+
+The `AmqpItemWriter` is an `ItemWriter` that uses an `AmqpTemplate` to send messages to
+an AMQP exchange. Messages are sent to the nameless exchange if the name is not specified in
+the provided `AmqpTemplate`. Spring Batch provides an `AmqpItemWriterBuilder` to
+construct an instance of the `AmqpItemWriter`.
+
+##### `JmsItemReader`
+
+The `JmsItemReader` is an `ItemReader` for JMS that uses a `JmsTemplate`. The template
+should have a default destination, which is used to provide items for the `read()`
+method. Spring Batch provides a `JmsItemReaderBuilder` to construct an instance of the
+`JmsItemReader`.
+
+##### `JmsItemWriter`
+
+The `JmsItemWriter` is an `ItemWriter` for JMS that uses a `JmsTemplate`. The template
+should have a default destination, which is used to send items in `write(List)`. Spring
+Batch provides a `JmsItemWriterBuilder` to construct an instance of the `JmsItemWriter`.
+
+##### `KafkaItemReader`
+
+The `KafkaItemReader` is an `ItemReader` for an Apache Kafka topic. It can be configured
+to read messages from multiple partitions of the same topic. It stores message offsets
+in the execution context to support restart capabilities. Spring Batch provides a
+`KafkaItemReaderBuilder` to construct an instance of the `KafkaItemReader`.
+
+##### `KafkaItemWriter`
+
+The `KafkaItemWriter` is an `ItemWriter` for Apache Kafka that uses a `KafkaTemplate` to
+send events to a default topic. Spring Batch provides a `KafkaItemWriterBuilder` to
+construct an instance of the `KafkaItemWriter`.
+
+#### Database Readers
+
+Spring Batch offers the following database readers:
+
+* [`Neo4jItemReader`](#Neo4jItemReader)
+
+* [`MongoItemReader`](#mongoItemReader)
+
+* [`HibernateCursorItemReader`](#hibernateCursorItemReader)
+
+* [`HibernatePagingItemReader`](#hibernatePagingItemReader)
+
+* [`RepositoryItemReader`](#repositoryItemReader)
+
+##### `Neo4jItemReader`
+
+The `Neo4jItemReader` is an `ItemReader` that reads objects from the graph database Neo4j
+by using a paging technique. Spring Batch provides a `Neo4jItemReaderBuilder` to
+construct an instance of the `Neo4jItemReader`.
+
+##### `MongoItemReader`
+
+The `MongoItemReader` is an `ItemReader` that reads documents from MongoDB by using a
+paging technique. Spring Batch provides a `MongoItemReaderBuilder` to construct an
+instance of the `MongoItemReader`.
+
+##### `HibernateCursorItemReader`
+
+The `HibernateCursorItemReader` is an `ItemStreamReader` for reading database records
+built on top of Hibernate. When initialized, it executes the HQL query and then iterates
+over the result set as the `read()` method is called, successively returning an object
+corresponding to the current row. Spring Batch provides a
+`HibernateCursorItemReaderBuilder` to construct an instance of the
+`HibernateCursorItemReader`.
+
+##### `HibernatePagingItemReader`
+
+The `HibernatePagingItemReader` is an `ItemReader` for reading database records built on
+top of Hibernate and reading only up to a fixed number of items at a time. Spring Batch
+provides a `HibernatePagingItemReaderBuilder` to construct an instance of the `HibernatePagingItemReader`.
+
+##### `RepositoryItemReader`
+
+The `RepositoryItemReader` is an `ItemReader` that reads records by using a `PagingAndSortingRepository`. Spring Batch provides a `RepositoryItemReaderBuilder` to
+construct an instance of the `RepositoryItemReader`.
+
+#### Database Writers
+
+Spring Batch offers the following database writers:
+
+* [`Neo4jItemWriter`](#neo4jItemWriter)
+
+* [`MongoItemWriter`](#mongoItemWriter)
+
+* [`RepositoryItemWriter`](#repositoryItemWriter)
+
+* [`HibernateItemWriter`](#hibernateItemWriter)
+
+* [`JdbcBatchItemWriter`](#jdbcBatchItemWriter)
+
+* [`JpaItemWriter`](#jpaItemWriter)
+
+* [`GemfireItemWriter`](#gemfireItemWriter)
+
+##### `Neo4jItemWriter`
+
+The `Neo4jItemWriter` is an `ItemWriter` implementation that writes to a Neo4j database.
+Spring Batch provides a `Neo4jItemWriterBuilder` to construct an instance of the `Neo4jItemWriter`.
+
+##### `MongoItemWriter`
+
+The `MongoItemWriter` is an `ItemWriter` implementation that writes to a MongoDB store
+using an implementation of Spring Data’s `MongoOperations`. Spring Batch provides a `MongoItemWriterBuilder` to construct an instance of the `MongoItemWriter`.
+
+##### `RepositoryItemWriter`
+
+The `RepositoryItemWriter` is an `ItemWriter` wrapper for a `CrudRepository` from Spring
+Data. Spring Batch provides a `RepositoryItemWriterBuilder` to construct an instance of
+the `RepositoryItemWriter`.
+
+##### `HibernateItemWriter`
+
+The `HibernateItemWriter` is an `ItemWriter` that uses a Hibernate session to save or
+update entities that are not part of the current Hibernate session. Spring Batch provides
+a `HibernateItemWriterBuilder` to construct an instance of the `HibernateItemWriter`.
+
+##### `JdbcBatchItemWriter`
+
+The `JdbcBatchItemWriter` is an `ItemWriter` that uses the batching features from `NamedParameterJdbcTemplate` to execute a batch of statements for all items provided.
+Spring Batch provides a `JdbcBatchItemWriterBuilder` to construct an instance of the `JdbcBatchItemWriter`.
+
+##### `JpaItemWriter`
+
+The `JpaItemWriter` is an `ItemWriter` that uses a JPA `EntityManagerFactory` to merge
+any entities that are not part of the persistence context. Spring Batch provides a `JpaItemWriterBuilder` to construct an instance of the `JpaItemWriter`.
+
+##### `GemfireItemWriter`
+
+The `GemfireItemWriter` is an `ItemWriter` that uses a `GemfireTemplate` that stores
+items in GemFire as key/value pairs. Spring Batch provides a `GemfireItemWriterBuilder` to construct an instance of the `GemfireItemWriter`.
+
+#### Specialized Readers
+
+Spring Batch offers the following specialized readers:
+
+* [`LdifReader`](#ldifReader)
+
+* [`MappingLdifReader`](#mappingLdifReader)
+
+* [`AvroItemReader`](#avroItemReader)
+
+##### `LdifReader`
+
+The `LdifReader` reads LDIF (LDAP Data Interchange Format) records from a `Resource`,
+parses them, and returns an `LdapAttributes` object for each `read` executed. Spring Batch
+provides a `LdifReaderBuilder` to construct an instance of the `LdifReader`.
+
+##### `MappingLdifReader`
+
+The `MappingLdifReader` reads LDIF (LDAP Data Interchange Format) records from a `Resource`, parses them, and then maps each LDIF record to a POJO (Plain Old Java Object).
+Each read returns a POJO. Spring Batch provides a `MappingLdifReaderBuilder` to construct
+an instance of the `MappingLdifReader`.
+
+##### `AvroItemReader`
+
+The `AvroItemReader` reads serialized Avro data from a `Resource`.
+Each read returns an instance of the type specified by a Java class or Avro schema.
+The reader can optionally be configured for input that does or does not embed an Avro schema.
+Spring Batch provides an `AvroItemReaderBuilder` to construct an instance of the `AvroItemReader`.
+
+#### Specialized Writers
+
+Spring Batch offers the following specialized writers:
+
+* [`SimpleMailMessageItemWriter`](#simpleMailMessageItemWriter)
+
+* [`AvroItemWriter`](#avroItemWriter)
+
+##### `SimpleMailMessageItemWriter`
+
+The `SimpleMailMessageItemWriter` is an `ItemWriter` that can send mail messages. It
+delegates the actual sending of messages to an instance of `MailSender`. Spring Batch
+provides a `SimpleMailMessageItemWriterBuilder` to construct an instance of the `SimpleMailMessageItemWriter`.
+
+##### `AvroItemWriter`
+
+The `AvroItemWriter` serializes Java objects to a `WritableResource` according to the given type or schema.
+The writer can optionally be configured to embed an Avro schema in the output.
+Spring Batch provides an `AvroItemWriterBuilder` to construct an instance of the `AvroItemWriter`.
+
+#### Specialized Processors
+
+Spring Batch offers the following specialized processors:
+
+* [`ScriptItemProcessor`](#scriptItemProcessor)
+
+##### `ScriptItemProcessor`
+
+The `ScriptItemProcessor` is an `ItemProcessor` that passes the current item to the
+provided script and returns the script's result as the processor's output. Spring
+Batch provides a `ScriptItemProcessorBuilder` to construct an instance of the `ScriptItemProcessor`.
\ No newline at end of file
diff --git a/docs/en/spring-batch/repeat.md b/docs/en/spring-batch/repeat.md
new file mode 100644
index 0000000000000000000000000000000000000000..faf19b5c82840c13e072014d77186fac80eec54e
--- /dev/null
+++ b/docs/en/spring-batch/repeat.md
@@ -0,0 +1,212 @@
+# Repeat
+
+## Repeat
+
+### RepeatTemplate
+
+Batch processing is about repetitive actions, either as a simple optimization or as part
+of a job. To strategize and generalize the repetition and to provide what amounts to an
+iterator framework, Spring Batch has the `RepeatOperations` interface. The `RepeatOperations` interface has the following definition:
+
+```
+public interface RepeatOperations {
+
+ RepeatStatus iterate(RepeatCallback callback) throws RepeatException;
+
+}
+```
+
+The callback is an interface, shown in the following definition, that lets you insert
+some business logic to be repeated:
+
+```
+public interface RepeatCallback {
+
+ RepeatStatus doInIteration(RepeatContext context) throws Exception;
+
+}
+```
+
+The callback is executed repeatedly until the implementation determines that the
+iteration should end. The return value in these interfaces is an enumeration that can
+either be `RepeatStatus.CONTINUABLE` or `RepeatStatus.FINISHED`. A `RepeatStatus` enumeration conveys information to the caller of the repeat operations about whether
+there is any more work to do. Generally speaking, implementations of `RepeatOperations` should inspect the `RepeatStatus` and use it as part of the decision to end the
+iteration. Any callback that wishes to signal to the caller that there is no more work to
+do can return `RepeatStatus.FINISHED`.
+
+The simplest general purpose implementation of `RepeatOperations` is `RepeatTemplate`, as
+shown in the following example:
+
+```
+RepeatTemplate template = new RepeatTemplate();
+
+template.setCompletionPolicy(new SimpleCompletionPolicy(2));
+
+template.iterate(new RepeatCallback() {
+
+ public RepeatStatus doInIteration(RepeatContext context) {
+ // Do stuff in batch...
+ return RepeatStatus.CONTINUABLE;
+ }
+
+});
+```
+
+In the preceding example, we return `RepeatStatus.CONTINUABLE`, to show that there is
+more work to do. The callback can also return `RepeatStatus.FINISHED`, to signal to the
+caller that there is no more work to do. Some iterations can be terminated by
+considerations intrinsic to the work being done in the callback. Others are effectively
+infinite loops as far as the callback is concerned and the completion decision is
+delegated to an external policy, as in the case shown in the preceding example.
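The `iterate` contract can be sketched in a few lines of plain Java. The following is an illustration only (the `SimpleRepeat` class and its fixed-count policy are stand-ins for `RepeatTemplate` and `SimpleCompletionPolicy`, not Spring Batch API):

```java
import java.util.function.IntFunction;

public class SimpleRepeat {

    public enum Status { CONTINUABLE, FINISHED }

    // Minimal sketch of the iterate() contract: keep calling the callback
    // until it returns FINISHED or the completion policy (a fixed count
    // here, like SimpleCompletionPolicy) says the iteration is complete.
    public static int iterate(IntFunction<Status> callback, int maxIterations) {
        int calls = 0;
        while (calls < maxIterations) {
            calls++;
            if (callback.apply(calls) == Status.FINISHED) {
                break;
            }
        }
        return calls; // how many times the callback actually ran
    }
}
```

Either side can end the loop: the callback by returning `FINISHED`, or the policy by reaching its count, which mirrors the two termination paths described above.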
+
+#### RepeatContext
+
+The method parameter for the `RepeatCallback` is a `RepeatContext`. Many callbacks ignore
+the context. However, if necessary, it can be used as an attribute bag to store transient
+data for the duration of the iteration. After the `iterate` method returns, the context
+no longer exists.
+
+If there is a nested iteration in progress, a `RepeatContext` has a parent context. The
+parent context is occasionally useful for storing data that need to be shared between
+calls to `iterate`. This is the case, for instance, if you want to count the number of
+occurrences of an event in the iteration and remember it across subsequent calls.
+
+#### RepeatStatus
+
+`RepeatStatus` is an enumeration used by Spring Batch to indicate whether processing has
+finished. It has two possible `RepeatStatus` values, described in the following table:
+
+| *Value* | *Description* |
+|---------------|----------------------------------------|
+| `CONTINUABLE` | There is more work to do. |
+| `FINISHED` | No more repetitions should take place. |
+
+`RepeatStatus` values can also be combined with a logical AND operation by using the `and()` method in `RepeatStatus`. The effect of this is to do a logical AND on the
+continuable flag. In other words, if either status is `FINISHED`, then the result is `FINISHED`.
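This combination rule can be checked with a tiny stand-in enum (illustrative only, not the Spring Batch class):

```java
public enum RepeatStatusSketch {
    CONTINUABLE(true), FINISHED(false);

    private final boolean continuable;

    RepeatStatusSketch(boolean continuable) {
        this.continuable = continuable;
    }

    // Logical AND on the continuable flag: if either side is FINISHED,
    // the combined result is FINISHED.
    public RepeatStatusSketch and(RepeatStatusSketch other) {
        return (this.continuable && other.continuable) ? CONTINUABLE : FINISHED;
    }
}
```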
+
+### Completion Policies
+
+Inside a `RepeatTemplate`, the termination of the loop in the `iterate` method is
+determined by a `CompletionPolicy`, which is also a factory for the `RepeatContext`. The `RepeatTemplate` has the responsibility to use the current policy to create a `RepeatContext` and pass that in to the `RepeatCallback` at every stage in the iteration.
+After a callback completes its `doInIteration`, the `RepeatTemplate` has to make a call
+to the `CompletionPolicy` to ask it to update its state (which will be stored in the `RepeatContext`). Then it asks the policy if the iteration is complete.
+
+Spring Batch provides some simple general purpose implementations of `CompletionPolicy`. `SimpleCompletionPolicy` allows execution up to a fixed number of times (with `RepeatStatus.FINISHED` forcing early completion at any time).
+
+Users might need to implement their own completion policies for more complicated
+decisions. For example, a batch processing window that prevents batch jobs from executing
+once the online systems are in use would require a custom policy.
+
+### Exception Handling
+
+If there is an exception thrown inside a `RepeatCallback`, the `RepeatTemplate` consults
+an `ExceptionHandler`, which can decide whether or not to re-throw the exception.
+
+The following listing shows the `ExceptionHandler` interface definition:
+
+```
+public interface ExceptionHandler {
+
+ void handleException(RepeatContext context, Throwable throwable)
+ throws Throwable;
+
+}
+```
+
+A common use case is to count the number of exceptions of a given type and fail when a
+limit is reached. For this purpose, Spring Batch provides the `SimpleLimitExceptionHandler` and a slightly more flexible `RethrowOnThresholdExceptionHandler`. The `SimpleLimitExceptionHandler` has a limit
+property and an exception type that should be compared with the current exception. All
+subclasses of the provided type are also counted. Exceptions of the given type are
+ignored until the limit is reached, and then they are rethrown. Exceptions of other types
+are always rethrown.
+
+An important optional property of the `SimpleLimitExceptionHandler` is the boolean flag
+called `useParent`. It is `false` by default, so the limit is only accounted for in the
+current `RepeatContext`. When set to `true`, the limit is kept across sibling contexts in
+a nested iteration (such as a set of chunks inside a step).
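The counting behavior of a limit-based handler can be sketched as follows (a framework-free illustration; `LimitExceptionHandlerSketch` is not the Spring Batch class, and a real implementation also consults the `RepeatContext`):

```java
public class LimitExceptionHandlerSketch {

    private final Class<? extends Throwable> type;
    private final int limit;
    private int count;

    public LimitExceptionHandlerSketch(Class<? extends Throwable> type, int limit) {
        this.type = type;
        this.limit = limit;
    }

    // Swallow exceptions of the configured type (and its subclasses) until
    // the limit is reached; rethrow everything else immediately.
    public void handleException(Throwable t) throws Throwable {
        if (type.isInstance(t) && ++count <= limit) {
            return; // ignored: still under the limit
        }
        throw t;
    }
}
```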
+
+### Listeners
+
+Often, it is useful to be able to receive additional callbacks for cross-cutting concerns
+across a number of different iterations. For this purpose, Spring Batch provides the `RepeatListener` interface. The `RepeatTemplate` lets users register `RepeatListener` implementations, and they are given callbacks with the `RepeatContext` and `RepeatStatus`, where available, during the iteration.
+
+The `RepeatListener` interface has the following definition:
+
+```
+public interface RepeatListener {
+ void before(RepeatContext context);
+ void after(RepeatContext context, RepeatStatus result);
+ void open(RepeatContext context);
+ void onError(RepeatContext context, Throwable e);
+ void close(RepeatContext context);
+}
+```
+
+The `open` and `close` callbacks come before and after the entire iteration. `before`, `after`, and `onError` apply to the individual `RepeatCallback` calls.
+
+Note that, when there is more than one listener, they are in a list, so there is an
+order. In this case, `open` and `before` are called in the same order, while `after`, `onError`, and `close` are called in reverse order.
+
+### Parallel Processing
+
+Implementations of `RepeatOperations` are not restricted to executing the callback
+sequentially. It is quite important that some implementations are able to execute their
+callbacks in parallel. To this end, Spring Batch provides the `TaskExecutorRepeatTemplate`, which uses the Spring `TaskExecutor` strategy to run the `RepeatCallback`. The default is to use a `SynchronousTaskExecutor`, which has the effect
+of executing the whole iteration in the same thread (the same as a normal `RepeatTemplate`).
+
+### Declarative Iteration
+
+Sometimes there is some business processing that you know you want to repeat every time
+it happens. The classic example of this is the optimization of a message pipeline. It is
+more efficient to process a batch of messages, if they are arriving frequently, than to
+bear the cost of a separate transaction for every message. Spring Batch provides an AOP
+interceptor that wraps a method call in a `RepeatOperations` object for just this
+purpose. The `RepeatOperationsInterceptor` executes the intercepted method and repeats
+according to the `CompletionPolicy` in the provided `RepeatTemplate`.
+
+The following example shows declarative iteration using the Spring AOP namespace to
+repeat a service call to a method called `processMessage` (for more detail on how to
+configure AOP interceptors, see the Spring User Guide):
+
+```
+
+
+
+
+
+
+```
+
+The following example demonstrates using Java configuration to
+repeat a service call to a method called `processMessage` (for more detail on how to
+configure AOP interceptors, see the Spring User Guide):
+
+```
+@Bean
+public MyService myService() {
+ ProxyFactory factory = new ProxyFactory(RepeatOperations.class.getClassLoader());
+ factory.setInterfaces(MyService.class);
+ factory.setTarget(new MyService());
+
+ MyService service = (MyService) factory.getProxy();
+ JdkRegexpMethodPointcut pointcut = new JdkRegexpMethodPointcut();
+ pointcut.setPatterns(".*processMessage.*");
+
+ RepeatOperationsInterceptor interceptor = new RepeatOperationsInterceptor();
+
+ ((Advised) service).addAdvisor(new DefaultPointcutAdvisor(pointcut, interceptor));
+
+ return service;
+}
+```
+
+The preceding example uses a default `RepeatTemplate` inside the interceptor. To change
+the policies, listeners, and other details, you can inject an instance of `RepeatTemplate` into the interceptor.
+
+If the intercepted method returns `void`, then the interceptor always returns `RepeatStatus.CONTINUABLE` (so there is a danger of an infinite loop if the `CompletionPolicy` does not have a finite end point). Otherwise, it returns `RepeatStatus.CONTINUABLE` until the return value from the intercepted method is `null`,
+at which point it returns `RepeatStatus.FINISHED`. Consequently, the business logic
+inside the target method can signal that there is no more work to do by returning `null` or by throwing an exception that is re-thrown by the `ExceptionHandler` in the provided `RepeatTemplate`.
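The repeat-until-`null` completion rule can be sketched framework-free (`RepeatUntilNull` and `drain` are illustrative names, not Spring Batch API):

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;
import java.util.function.Supplier;

public class RepeatUntilNull {

    // Sketch of the interceptor's completion rule: keep invoking the target
    // until it returns null, then report how many calls produced a value.
    public static int drain(Supplier<?> target) {
        int processed = 0;
        while (target.get() != null) {
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        Queue<String> messages = new ArrayDeque<>(List.of("a", "b", "c"));
        System.out.println(drain(messages::poll)); // prints 3
    }
}
```

Note that, as with the real interceptor, a target that never returns `null` loops forever, which is why the `CompletionPolicy` needs a finite end point.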
diff --git a/docs/en/spring-batch/retry.md b/docs/en/spring-batch/retry.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c4eadd02c3cba8a90120ac0a6c10d5ecf7923e2
--- /dev/null
+++ b/docs/en/spring-batch/retry.md
@@ -0,0 +1,312 @@
+# Retry
+
+## Retry
+
+To make processing more robust and less prone to failure, it sometimes helps to
+automatically retry a failed operation in case it might succeed on a subsequent attempt.
+Errors that are susceptible to intermittent failure are often transient in nature.
+Examples include a remote call to a web service that fails because of a network glitch or a `DeadlockLoserDataAccessException` in a database update.
+
+### `RetryTemplate`
+
+| |The retry functionality was pulled out of Spring Batch as of 2.2.0. It is now part of a new library, [Spring Retry](https://github.com/spring-projects/spring-retry).|
+|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+To automate retry operations, Spring Batch has the `RetryOperations` strategy. The
+following listing shows the interface definition for `RetryOperations`:
+
+```
+public interface RetryOperations {
+
+    <T, E extends Throwable> T execute(RetryCallback<T, E> retryCallback) throws E;
+
+    <T, E extends Throwable> T execute(RetryCallback<T, E> retryCallback,
+        RecoveryCallback<T> recoveryCallback) throws E;
+
+    <T, E extends Throwable> T execute(RetryCallback<T, E> retryCallback,
+        RetryState retryState) throws E, ExhaustedRetryException;
+
+    <T, E extends Throwable> T execute(RetryCallback<T, E> retryCallback,
+        RecoveryCallback<T> recoveryCallback, RetryState retryState) throws E;
+
+}
+```
+
+The basic callback is a simple interface that lets you insert some business logic to be
+retried, as shown in the following interface definition:
+
+```
+public interface RetryCallback<T, E extends Throwable> {
+
+ T doWithRetry(RetryContext context) throws E;
+
+}
+```
+
+The callback runs and, if it fails (by throwing an `Exception`), it is retried until
+either it is successful or the implementation aborts. There are a number of overloaded `execute` methods in the `RetryOperations` interface. Those methods deal with various use
+cases for recovery when all retry attempts are exhausted and deal with retry state, which
+lets clients and implementations store information between calls (we cover this in more
+detail later in the chapter).
+
+The simplest general purpose implementation of `RetryOperations` is `RetryTemplate`. It
+can be used as follows:
+
+```
+RetryTemplate template = new RetryTemplate();
+
+TimeoutRetryPolicy policy = new TimeoutRetryPolicy();
+policy.setTimeout(30000L);
+
+template.setRetryPolicy(policy);
+
+Foo result = template.execute(new RetryCallback<Foo, Exception>() {
+
+ public Foo doWithRetry(RetryContext context) {
+ // Do stuff that might fail, e.g. webservice operation
+ return result;
+ }
+
+});
+```
+
+In the preceding example, we make a web service call and return the result to the user. If
+that call fails, then it is retried until a timeout is reached.
+
+#### `RetryContext`
+
+The method parameter for the `RetryCallback` is a `RetryContext`. Many callbacks ignore
+the context, but, if necessary, it can be used as an attribute bag to store data for the
+duration of the iteration.
+
+A `RetryContext` has a parent context if there is a nested retry in progress in the same
+thread. The parent context is occasionally useful for storing data that need to be shared
+between calls to `execute`.
+
+#### `RecoveryCallback`
+
+When a retry is exhausted, the `RetryOperations` can pass control to a different callback,
+called the `RecoveryCallback`. To use this feature, clients pass in the callbacks together
+to the same method, as shown in the following example:
+
+```
+Foo foo = template.execute(new RetryCallback<Foo, Exception>() {
+    public Foo doWithRetry(RetryContext context) {
+        // business logic here
+    }
+}, new RecoveryCallback<Foo>() {
+    public Foo recover(RetryContext context) throws Exception {
+        // recover logic here
+    }
+});
+```
+
+If the business logic does not succeed before the template decides to abort, then the
+client is given the chance to do some alternate processing through the recovery callback.
+
+#### Stateless Retry
+
+In the simplest case, a retry is just a while loop. The `RetryTemplate` can just keep
+trying until it either succeeds or fails. The `RetryContext` contains some state to
+determine whether to retry or abort, but this state is on the stack and there is no need
+to store it anywhere globally, so we call this stateless retry. The distinction between
+stateless and stateful retry is contained in the implementation of the `RetryPolicy` (the `RetryTemplate` can handle both). In a stateless retry, the retry callback is always
+executed in the same thread it was on when it failed.
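A stateless retry really is just such a loop, as the following plain-Java sketch shows (`StatelessRetrySketch` is an illustration of the idea, not the Spring Retry implementation; a real `RetryPolicy` also classifies exceptions and can time out):

```java
import java.util.concurrent.Callable;

public class StatelessRetrySketch {

    // Sketch of a stateless retry: the attempt counter lives on the stack,
    // so nothing about the failure needs to be stored between executions.
    public static <T> T retry(Callable<T> callback, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return callback.call();
            } catch (Exception e) {
                last = e; // retry until attempts are exhausted
            }
        }
        throw last; // attempts exhausted: surface the final failure
    }
}
```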
+
+#### Stateful Retry
+
+Where the failure has caused a transactional resource to become invalid, there are some
+special considerations. This does not apply to a simple remote call because there is no
+transactional resource (usually), but it does sometimes apply to a database update,
+especially when using Hibernate. In this case it only makes sense to re-throw the
+exception that caused the failure immediately, so that the transaction can roll back and
+we can start a new, valid transaction.
+
+In cases involving transactions, a stateless retry is not good enough, because the
+re-throw and roll back necessarily involve leaving the `RetryOperations.execute()` method
+and potentially losing the context that was on the stack. To avoid losing it we have to
+introduce a storage strategy to lift it off the stack and put it (at a minimum) in heap
+storage. For this purpose, Spring Batch provides a storage strategy called `RetryContextCache`, which can be injected into the `RetryTemplate`. The default
+implementation of the `RetryContextCache` is in memory, using a simple `Map`. Advanced
+usage with multiple processes in a clustered environment might also consider implementing
+the `RetryContextCache` with a cluster cache of some sort (however, even in a clustered
+environment, this might be overkill).
+
+Part of the responsibility of the `RetryOperations` is to recognize the failed operations
+when they come back in a new execution (and usually wrapped in a new transaction). To
+facilitate this, Spring Batch provides the `RetryState` abstraction. This works in
+conjunction with the special `execute` methods in the `RetryOperations` interface.
+
+The way the failed operations are recognized is by identifying the state across multiple
+invocations of the retry. To identify the state, the user can provide a `RetryState` object that is responsible for returning a unique key identifying the item. The identifier
+is used as a key in the `RetryContextCache` interface.
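The role of the cache can be sketched as a map from business key to attempt count (an illustration of the idea only; the real `RetryContextCache` stores full `RetryContext` objects, not bare counters):

```java
import java.util.HashMap;
import java.util.Map;

public class ContextCacheSketch {

    // Sketch of a RetryContextCache-style store: the attempt count for a
    // failed item survives between executions, keyed by a business key.
    private final Map<Object, Integer> attempts = new HashMap<>();

    // Returns the attempt number for this key (1 on first sight).
    public int register(Object businessKey) {
        return attempts.merge(businessKey, 1, Integer::sum);
    }

    public void clear(Object businessKey) {
        attempts.remove(businessKey); // item succeeded or was recovered
    }
}
```

Because the key is how a returning failure is recognized, a stable business key (such as a JMS message ID) matters more here than in the stateless case.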
+
+| |Be very careful with the implementation of `Object.equals()` and `Object.hashCode()` in the key returned by `RetryState`. The best advice is to use a business key to identify the items. In the case of a JMS message, the message ID can be used.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+When the retry is exhausted, there is also the option to handle the failed item in a
+different way, instead of calling the `RetryCallback` (which is now presumed to be likely
+to fail). Just like in the stateless case, this option is provided by the`RecoveryCallback`, which can be provided by passing it in to the `execute` method of`RetryOperations`.
+
+The decision to retry or not is actually delegated to a regular `RetryPolicy`, so the
+usual concerns about limits and timeouts can be injected there (described later in this
+chapter).
+
+### Retry Policies
+
+Inside a `RetryTemplate`, the decision to retry or fail in the `execute` method is
+determined by a `RetryPolicy`, which is also a factory for the `RetryContext`. The `RetryTemplate` has the responsibility to use the current policy to create a `RetryContext` and pass that in to the `RetryCallback` at every attempt. After a callback
+fails, the `RetryTemplate` has to make a call to the `RetryPolicy` to ask it to update its
+state (which is stored in the `RetryContext`) and then asks the policy if another attempt
+can be made. If another attempt cannot be made (such as when a limit is reached or a
+timeout is detected) then the policy is also responsible for handling the exhausted state.
+Simple implementations throw `RetryExhaustedException`, which causes any enclosing
+transaction to be rolled back. More sophisticated implementations might attempt to take
+some recovery action, in which case the transaction can remain intact.
+
+| |Failures are inherently either retryable or not. If the same exception is always going to be thrown from the business logic, it does no good to retry it. So do not retry on all exception types. Rather, try to focus on only those exceptions that you expect to be retryable. It is not usually harmful to the business logic to retry more aggressively, but it is wasteful, because, if a failure is deterministic, you spend time retrying something that you know in advance is fatal.|
+|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+Spring Batch provides some simple general purpose implementations of stateless `RetryPolicy`, such as `SimpleRetryPolicy` and `TimeoutRetryPolicy` (used in the preceding example).
+
+The `SimpleRetryPolicy` allows a retry on any of a named list of exception types, up to a
+fixed number of times. It also has a list of "fatal" exceptions that should never be
+retried, and this list overrides the retryable list so that it can be used to give finer
+control over the retry behavior, as shown in the following example:
+
+```
+SimpleRetryPolicy policy = new SimpleRetryPolicy();
+// Set the max retry attempts
+policy.setMaxAttempts(5);
+// Retry on all exceptions (this is the default)
+policy.setRetryableExceptions(new Class[] {Exception.class});
+// ... but never retry IllegalStateException
+policy.setFatalExceptions(new Class[] {IllegalStateException.class});
+
+// Use the policy...
+RetryTemplate template = new RetryTemplate();
+template.setRetryPolicy(policy);
+template.execute(new RetryCallback<Foo, Exception>() {
+ public Foo doWithRetry(RetryContext context) {
+ // business logic here
+ }
+});
+```
+
+There is also a more flexible implementation called `ExceptionClassifierRetryPolicy`,
+which lets the user configure different retry behavior for an arbitrary set of exception
+types through the `ExceptionClassifier` abstraction. The policy works by calling on the
+classifier to convert an exception into a delegate `RetryPolicy`. For example, one
+exception type can be retried more times before failure than another by mapping it to a
+different policy.
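The classification step can be sketched as a lookup from exception type to a per-type limit (`ClassifierSketch` is an illustration only; the real policy maps exceptions to whole delegate `RetryPolicy` objects and walks the class hierarchy):

```java
import java.util.Map;

public class ClassifierSketch {

    private final Map<Class<? extends Throwable>, Integer> limits;
    private final int defaultLimit;

    public ClassifierSketch(Map<Class<? extends Throwable>, Integer> limits,
            int defaultLimit) {
        this.limits = limits;
        this.defaultLimit = defaultLimit;
    }

    // Classify an exception into a per-type retry limit; unknown types get
    // the default, so different failures can be retried different amounts.
    public int maxAttemptsFor(Throwable t) {
        return limits.getOrDefault(t.getClass(), defaultLimit);
    }
}
```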
+
+Users might need to implement their own retry policies for more customized decisions. For
+instance, a custom retry policy makes sense when there is a well-known, solution-specific
+classification of exceptions into retryable and not retryable.
+
+### Backoff Policies
+
+When retrying after a transient failure, it often helps to wait a bit before trying again,
+because usually the failure is caused by some problem that can only be resolved by
+waiting. If a `RetryCallback` fails, the `RetryTemplate` can pause execution according to
+the `BackOffPolicy`.
+
+The following listing shows the definition of the `BackOffPolicy` interface:
+
+```
+public interface BackOffPolicy {
+
+ BackOffContext start(RetryContext context);
+
+ void backOff(BackOffContext backOffContext)
+ throws BackOffInterruptedException;
+
+}
+```
+
+A `BackOffPolicy` is free to implement the backoff in any way it chooses. The policies
+provided by Spring Batch out of the box all use `Object.wait()`. A common use case is to
+back off with an exponentially increasing wait period, to avoid two retries getting into
+lock step and both failing (a lesson learned from Ethernet). For this purpose,
+Spring Batch provides the `ExponentialBackOffPolicy`.
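The shape of such a schedule is easy to see in isolation. The following sketch computes the wait intervals only (illustrative; the real `ExponentialBackOffPolicy` configures `initialInterval`, `multiplier`, and `maxInterval` and does the actual sleeping):

```java
public class ExponentialBackoffSketch {

    // Sketch of an exponential backoff schedule: the initial interval is
    // multiplied on each attempt and capped at a maximum.
    public static long[] delays(long initialMs, double multiplier,
            long maxMs, int attempts) {
        long[] result = new long[attempts];
        double delay = initialMs;
        for (int i = 0; i < attempts; i++) {
            result[i] = (long) Math.min(delay, maxMs);
            delay *= multiplier;
        }
        return result;
    }
}
```

For example, an initial interval of 100ms with multiplier 2.0 and cap 500ms yields waits of 100, 200, 400, and then 500ms for every further attempt.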
+
+### Listeners
+
+Often, it is useful to be able to receive additional callbacks for cross cutting concerns
+across a number of different retries. For this purpose, Spring Batch provides the `RetryListener` interface. The `RetryTemplate` lets users register `RetryListener` implementations, and
+they are given callbacks with `RetryContext` and `Throwable` where available during the
+iteration.
+
+The following code shows the interface definition for `RetryListener`:
+
+```
+public interface RetryListener {
+
+    <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback);
+
+    <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback, Throwable throwable);
+
+    <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback, Throwable throwable);
+}
+```
+
+The `open` and `close` callbacks come before and after the entire retry in the simplest
+case, and `onError` applies to the individual `RetryCallback` calls. The `close` method
+might also receive a `Throwable`. If there has been an error, it is the last one thrown by
+the `RetryCallback`.
+
+Note that, when there is more than one listener, they are in a list, so there is an order.
+In this case, `open` is called in the same order while `onError` and `close` are called in
+reverse order.
+
+### Declarative Retry
+
+Sometimes, there is some business processing that you know you want to retry every time it
+happens. The classic example of this is the remote service call. Spring Batch provides an
+AOP interceptor that wraps a method call in a `RetryOperations` implementation for just
+this purpose. The `RetryOperationsInterceptor` executes the intercepted method and retries
+on failure according to the `RetryPolicy` in the provided `RetryTemplate`.
+
+The following example shows a declarative retry that uses the Spring AOP namespace to
+retry a service call to a method called `remoteCall` (for more detail on how to configure
+AOP interceptors, see the Spring User Guide):
+
+```
+
+
+
+
+
+
+```
+
+The following example shows a declarative retry that uses Java configuration to retry a
+service call to a method called `remoteCall` (for more detail on how to configure AOP
+interceptors, see the Spring User Guide):
+
+```
+@Bean
+public MyService myService() {
+    ProxyFactory factory = new ProxyFactory(RetryOperations.class.getClassLoader());
+ factory.setInterfaces(MyService.class);
+ factory.setTarget(new MyService());
+
+ MyService service = (MyService) factory.getProxy();
+ JdkRegexpMethodPointcut pointcut = new JdkRegexpMethodPointcut();
+ pointcut.setPatterns(".*remoteCall.*");
+
+ RetryOperationsInterceptor interceptor = new RetryOperationsInterceptor();
+
+ ((Advised) service).addAdvisor(new DefaultPointcutAdvisor(pointcut, interceptor));
+
+ return service;
+}
+```
+
+The preceding example uses a default `RetryTemplate` inside the interceptor. To change the
+policies or listeners, you can inject an instance of `RetryTemplate` into the interceptor.
\ No newline at end of file
diff --git a/docs/en/spring-batch/scalability.md b/docs/en/spring-batch/scalability.md
new file mode 100644
index 0000000000000000000000000000000000000000..df00f11321661d44b10400634759f2c8a347dbc5
--- /dev/null
+++ b/docs/en/spring-batch/scalability.md
@@ -0,0 +1,447 @@
+# Scaling and Parallel Processing
+
+## Scaling and Parallel Processing
+
+Many batch processing problems can be solved with single threaded, single process jobs,
+so it is always a good idea to properly check if that meets your needs before thinking
+about more complex implementations. Measure the performance of a realistic job and see if
+the simplest implementation meets your needs first. You can read and write a file of
+several hundred megabytes in well under a minute, even with standard hardware.
+
+When you are ready to start implementing a job with some parallel processing, Spring
+Batch offers a range of options, which are described in this chapter, although some
+features are covered elsewhere. At a high level, there are two modes of parallel
+processing:
+
+* Single process, multi-threaded
+
+* Multi-process
+
+These break down into categories as well, as follows:
+
+* Multi-threaded Step (single process)
+
+* Parallel Steps (single process)
+
+* Remote Chunking of Step (multi process)
+
+* Partitioning a Step (single or multi process)
+
+First, we review the single-process options. Then we review the multi-process options.
+
+### Multi-threaded Step
+
+The simplest way to start parallel processing is to add a `TaskExecutor` to your Step
+configuration.
+
+For example, you might add an attribute of the `tasklet`, as follows:
+
+```
+<step id="loading">
+    <tasklet task-executor="taskExecutor">...</tasklet>
+</step>
+```
+
+When using Java configuration, a `TaskExecutor` can be added to the step,
+as shown in the following example:
+
+Java Configuration
+
+```
+@Bean
+public TaskExecutor taskExecutor() {
+ return new SimpleAsyncTaskExecutor("spring_batch");
+}
+
+@Bean
+public Step sampleStep(TaskExecutor taskExecutor) {
+ return this.stepBuilderFactory.get("sampleStep")
+ .chunk(10)
+ .reader(itemReader())
+ .writer(itemWriter())
+ .taskExecutor(taskExecutor)
+ .build();
+}
+```
+
+In this example, the `taskExecutor` is a reference to another bean definition that
+implements the `TaskExecutor` interface. [`TaskExecutor`](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/core/task/TaskExecutor.html) is a standard Spring interface, so consult the Spring User Guide for details of available
+implementations. The simplest multi-threaded `TaskExecutor` is a `SimpleAsyncTaskExecutor`.
+
+The result of the above configuration is that the `Step` executes by reading, processing,
+and writing each chunk of items (each commit interval) in a separate thread of execution.
+Note that this means there is no fixed order for the items to be processed, and a chunk
+might contain items that are non-consecutive compared to the single-threaded case. In
+addition to any limits placed by the task executor (such as whether it is backed by a
+thread pool), there is a throttle limit in the tasklet configuration, which defaults to 4.
+You may need to increase this to ensure that a thread pool is fully utilized.
+
+For example, you might increase the `throttle-limit`, as shown in the following example:
+
+```
+<step id="loading"> <tasklet
+    task-executor="taskExecutor"
+    throttle-limit="20">...</tasklet>
+</step>
+```
+
+When using Java configuration, the builders provide access to the throttle limit, as shown
+in the following example:
+
+Java Configuration
+
+```
+@Bean
+public Step sampleStep(TaskExecutor taskExecutor) {
+ return this.stepBuilderFactory.get("sampleStep")
+ .chunk(10)
+ .reader(itemReader())
+ .writer(itemWriter())
+ .taskExecutor(taskExecutor)
+ .throttleLimit(20)
+ .build();
+}
+```
+
+Note also that there may be limits placed on concurrency by any pooled resources used in
+your step, such as a `DataSource`. Be sure to make the pool in those resources at least
+as large as the desired number of concurrent threads in the step.
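What the throttle limit described above does can be illustrated with a plain-Java sketch (not Spring Batch code): a semaphore caps how many chunks are in flight at once, even when the backing pool has more threads available. All names here are illustrative, and the limit of 4 mirrors the documented default:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottleSketch {

    public static int maxObservedConcurrency(int chunks, int throttleLimit) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(16);  // pool larger than the limit
        Semaphore throttle = new Semaphore(throttleLimit);        // the throttle itself
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxSeen = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(chunks);
        for (int i = 0; i < chunks; i++) {
            pool.submit(() -> {
                try {
                    throttle.acquire();                           // wait for a free slot
                    int now = inFlight.incrementAndGet();
                    maxSeen.accumulateAndGet(now, Math::max);     // record peak concurrency
                    Thread.sleep(10);                             // simulate chunk work
                    inFlight.decrementAndGet();
                    throttle.release();
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        return maxSeen.get();
    }
}
```

However large the pool, no more than `throttleLimit` chunks ever run concurrently, which is why an undersized throttle can leave a thread pool idle.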
+
+There are some practical limitations of using multi-threaded `Step` implementations for
+some common batch use cases. Many participants in a `Step` (such as readers and writers)
+are stateful. If the state is not segregated by thread, then those components are not
+usable in a multi-threaded `Step`. In particular, most of the off-the-shelf readers and
+writers from Spring Batch are not designed for multi-threaded use. It is, however,
+possible to work with stateless or thread safe readers and writers, and there is a sample
+(called `parallelJob`) in the [Spring
+Batch Samples](https://github.com/spring-projects/spring-batch/tree/master/spring-batch-samples) that shows the use of a process indicator (see [Preventing State Persistence](readersAndWriters.html#process-indicator)) to keep track
+of items that have been processed in a database input table.
+
+Spring Batch provides some implementations of `ItemWriter` and `ItemReader`. The Javadoc
+usually states whether a component is thread-safe and, if not, what you must do to avoid
+problems in a concurrent environment. If the Javadoc gives no information, you can check
+the implementation to see whether it holds any state. If a reader is not thread-safe, you
+can decorate it with the provided `SynchronizedItemStreamReader` or use it in your own
+synchronizing delegator. You can synchronize the call to `read()`, and as long as the
+processing and writing is the most expensive part of the chunk, your step may still
+complete much faster than it would in a single-threaded configuration.
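A synchronizing delegator can be sketched in a few lines. The `ItemReader` interface below is a simplified stand-in for the Spring Batch one (hypothetical, for illustration only); the decorator simply serializes calls to `read()`:

```java
import java.util.Iterator;
import java.util.List;

public class SynchronizedReaderSketch {

    interface ItemReader<T> {
        T read();   // returns null once the input is exhausted
    }

    // A non-thread-safe reader backed by an Iterator.
    static class ListReader<T> implements ItemReader<T> {
        private final Iterator<T> it;
        ListReader(List<T> items) { this.it = items.iterator(); }
        public T read() { return it.hasNext() ? it.next() : null; }
    }

    // The delegator: serialize access so each item is handed out exactly once.
    static class SynchronizedReader<T> implements ItemReader<T> {
        private final ItemReader<T> delegate;
        SynchronizedReader(ItemReader<T> delegate) { this.delegate = delegate; }
        public synchronized T read() { return delegate.read(); }
    }
}
```

Because only `read()` is serialized, the (usually more expensive) processing and writing still run in parallel.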
+
+### Parallel Steps
+
+As long as the application logic that needs to be parallelized can be split into distinct
+responsibilities and assigned to individual steps, then it can be parallelized in a
+single process. Parallel Step execution is easy to configure and use.
+
+For example, executing steps `(step1,step2)` in parallel with `step3` is straightforward,
+as shown in the following example:
+
+```
+<job id="job1">
+    <split id="split1" task-executor="taskExecutor" next="step4">
+        <flow>
+            <step id="step1" parent="s1" next="step2"/>
+            <step id="step2" parent="s2"/>
+        </flow>
+        <flow>
+            <step id="step3" parent="s3"/>
+        </flow>
+    </split>
+    <step id="step4" parent="s4"/>
+</job>
+
+<beans:bean id="taskExecutor" class="org.spr...SimpleAsyncTaskExecutor"/>
+```
+
+When using Java configuration, executing steps `(step1,step2)` in parallel with `step3` is straightforward, as shown in the following example:
+
+Java Configuration
+
+```
+@Bean
+public Job job() {
+ return jobBuilderFactory.get("job")
+ .start(splitFlow())
+ .next(step4())
+ .build() //builds FlowJobBuilder instance
+ .build(); //builds Job instance
+}
+
+@Bean
+public Flow splitFlow() {
+ return new FlowBuilder<SimpleFlow>("splitFlow")
+ .split(taskExecutor())
+ .add(flow1(), flow2())
+ .build();
+}
+
+@Bean
+public Flow flow1() {
+ return new FlowBuilder<SimpleFlow>("flow1")
+ .start(step1())
+ .next(step2())
+ .build();
+}
+
+@Bean
+public Flow flow2() {
+ return new FlowBuilder<SimpleFlow>("flow2")
+ .start(step3())
+ .build();
+}
+
+@Bean
+public TaskExecutor taskExecutor() {
+ return new SimpleAsyncTaskExecutor("spring_batch");
+}
+```
+
+The configurable task executor is used to specify which `TaskExecutor` implementation
+should be used to execute the individual flows. The default is `SyncTaskExecutor`, but an
+asynchronous `TaskExecutor` is required to run the steps in parallel. Note that the job
+ensures that every flow in the split completes before aggregating the exit statuses and
+transitioning.
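The split semantics can be sketched with plain Java futures (illustrative names, not the Spring Batch API): flow1 (step1 then step2) runs concurrently with flow2 (step3), and step4 starts only after both flows have completed:

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SplitFlowSketch {

    public static List<String> run() {
        Queue<String> log = new ConcurrentLinkedQueue<>();
        // flow1: step1 -> step2, sequential within the flow
        CompletableFuture<Void> flow1 = CompletableFuture.runAsync(() -> {
            log.add("step1");
            log.add("step2");
        });
        // flow2: step3, runs in parallel with flow1
        CompletableFuture<Void> flow2 = CompletableFuture.runAsync(() -> log.add("step3"));
        CompletableFuture.allOf(flow1, flow2).join();  // wait for the whole split
        log.add("step4");                              // transitions only after both flows
        return List.copyOf(log);
    }
}
```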
+
+See the section on [Split Flows](step.html#split-flows) for more detail.
+
+### Remote Chunking
+
+In remote chunking, the `Step` processing is split across multiple processes,
+communicating with each other through some middleware. The following image shows the
+pattern:
+
+![Remote Chunking](https://docs.spring.io/spring-batch/docs/current/reference/html/images/remote-chunking.png)
+
+Figure 1. Remote Chunking
+
+The manager component is a single process, and the workers are multiple remote processes.
+This pattern works best if the manager is not a bottleneck, so the processing must be more
+expensive than the reading of items (as is often the case in practice).
+
+The manager is an implementation of a Spring Batch `Step` with the `ItemWriter` replaced
+by a generic version that knows how to send chunks of items to the middleware as
+messages. The workers are standard listeners for whatever middleware is being used (for
+example, with JMS, they would be `MessageListener` implementations), and their role is
+to process the chunks of items using a standard `ItemWriter` or `ItemProcessor` plus `ItemWriter`, through the `ChunkProcessor` interface. One of the advantages of using this
+pattern is that the reader, processor, and writer components are off-the-shelf (the same
+as would be used for a local execution of the step). The items are divided up dynamically
+and work is shared through the middleware, so that, if the listeners are all eager
+consumers, then load balancing is automatic.
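The manager/worker division of labor can be sketched conceptually with an in-memory queue standing in for the durable middleware (all names here are illustrative, not Spring Batch or JMS APIs): the manager sends fixed-size chunks, and eager workers take them off the queue, so load balancing falls out naturally:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class RemoteChunkingSketch {

    public static List<String> process(List<String> items, int chunkSize, int workers)
            throws Exception {
        BlockingQueue<List<String>> middleware = new LinkedBlockingQueue<>();
        List<String> processed = new CopyOnWriteArrayList<>();

        // Manager side: read items and send them as chunk "messages".
        for (int i = 0; i < items.size(); i += chunkSize) {
            middleware.put(new ArrayList<>(
                    items.subList(i, Math.min(i + chunkSize, items.size()))));
        }

        // Worker side: each worker drains chunks until the queue is empty;
        // poll() is atomic, so every chunk is consumed exactly once.
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                List<String> chunk;
                while ((chunk = middleware.poll()) != null) {
                    chunk.forEach(item -> processed.add(item.toUpperCase())); // "process + write"
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return processed;
    }
}
```

Real middleware adds what this sketch lacks: durability and guaranteed single-consumer delivery across processes.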
+
+The middleware has to be durable, with guaranteed delivery and a single consumer for each
+message. JMS is the obvious candidate, but other options (such as JavaSpaces) exist in
+the grid computing and shared memory product space.
+
+See the section on [Spring Batch Integration - Remote Chunking](spring-batch-integration.html#remote-chunking) for more detail.
+
+### Partitioning
+
+Spring Batch also provides an SPI for partitioning a `Step` execution and executing it
+remotely. In this case, the remote participants are `Step` instances that could just as
+easily have been configured and used for local processing. The following image shows the
+pattern:
+
+![Partitioning Overview](https://docs.spring.io/spring-batch/docs/current/reference/html/images/partitioning-overview.png)
+
+Figure 2. Partitioning
+
+The `Job` runs on the left-hand side as a sequence of `Step` instances, and one of the `Step` instances is labeled as a manager. The workers in this picture are all identical
+instances of a `Step`, which could in fact take the place of the manager, resulting in the
+same outcome for the `Job`. The workers are typically going to be remote services but
+could also be local threads of execution. The messages sent by the manager to the workers
+in this pattern do not need to be durable or have guaranteed delivery. Spring Batch
+metadata in the `JobRepository` ensures that each worker is executed once and only once for
+each `Job` execution.
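The splitting half of this pattern can be sketched in plain Java (illustrative, not the Spring Batch SPI): a manager divides a key range into roughly equal partitions, one per worker step:

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionSketch {

    // Split [0, total) into at most gridSize contiguous ranges, the way a
    // partitioner would build one execution context per worker step.
    public static List<int[]> partition(int total, int gridSize) {
        List<int[]> partitions = new ArrayList<>();
        int size = (total + gridSize - 1) / gridSize;  // ceiling division
        for (int start = 0; start < total; start += size) {
            partitions.add(new int[] { start, Math.min(start + size, total) });
        }
        return partitions;
    }
}
```

Each worker then processes its `[start, end)` range independently, and the job repository's metadata is what guarantees each partition runs exactly once per job execution.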
+
+The SPI in Spring Batch consists of a special implementation of `Step` (called the `PartitionStep`) and two strategy interfaces that need to be implemented for the specific
+environment. The strategy interfaces are `PartitionHandler` and `StepExecutionSplitter`,
+and their role is shown in the following sequence diagram:
+
+![Partitioning SPI](https://docs.spring.io/spring-batch/docs/current/reference/html/images/partitioning-spi.png)
+
+Figure 3. Partitioning SPI
+
+The `Step` on the right in this case is the “remote” worker, so, potentially, there are
+many objects or processes playing this role, and the `PartitionStep` is shown driving
+the execution.
+
+The following example shows the `PartitionStep` configuration when using XML
+configuration:
+
+```
+<step id="step1.manager">
+    <partition step="step1" partitioner="partitioner">
+        <handler grid-size="10" task-executor="taskExecutor"/>
+    </partition>
+</step>