## Appendix A: List of ItemReaders and ItemWriters
### Item Readers
| Item Reader | Description |
|----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| StaxEventItemReader | Reads via StAX. See [`StaxEventItemReader`](readersAndWriters.html#StaxEventItemReader). |
| JsonItemReader | Reads items from a JSON document. See [`JsonItemReader`](readersAndWriters.html#JsonItemReader). |
### Item Writers
| Item Writer | Description |
|--------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
# Common Batch Patterns
In this chapter, we provide a few examples of common patterns in custom business logic.
These examples primarily feature the listener interfaces. It should be noted that an `ItemReader` or `ItemWriter` can implement a listener interface as well, if appropriate.
### Logging Item Processing and Failures
A common use case is the need for special handling of errors in a step, item by item,
perhaps logging to a special channel or inserting a record into a database.
| |If your listener does anything in an `onError()` method, it must be inside<br/>a transaction that is going to be rolled back. If you need to use a transactional<br/>resource, such as a database, inside an `onError()` method, consider adding a declarative<br/>transaction to that method (see Spring Core Reference Guide for details), and giving its<br/>propagation attribute a value of `REQUIRES_NEW`.|
|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
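A sketch of the shape such a listener can take, in the spirit of the `ItemListenerSupport`-based example this section builds on (the logger category is an assumption):

```
import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.batch.core.listener.ItemListenerSupport;

public class ItemFailureLoggerListener extends ItemListenerSupport<Object, Object> {

    private static final Log logger = LogFactory.getLog("item.error");

    // Invoked when the ItemReader throws an exception
    @Override
    public void onReadError(Exception ex) {
        logger.error("Encountered error on read", ex);
    }

    // Invoked when writing the current chunk fails
    @Override
    public void onWriteError(Exception ex, List<?> items) {
        logger.error("Encountered error on write", ex);
    }
}
```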
### Stopping a Job Manually for Business Reasons
Spring Batch provides a `stop()` method through the `JobOperator` interface, but this is
really for use by the operator rather than the application programmer. Sometimes, it is
more convenient or makes more sense to stop a job execution from within the business logic.
When the flag is set, the default behavior is for the step to throw a `JobInterruptedException`. This behavior can be controlled through the `StepInterruptionPolicy`. However, the only choice is to throw or not throw an exception,
so this is always an abnormal ending to a job.
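A condensed sketch of the `CustomItemWriter` referenced above (the stop condition is a hypothetical business rule):

```
import java.util.List;

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.item.ItemWriter;

public class CustomItemWriter implements ItemWriter<String>, StepExecutionListener {

    private StepExecution stepExecution;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return null;
    }

    @Override
    public void write(List<? extends String> items) throws Exception {
        for (String item : items) {
            // Hypothetical business condition: ask the framework to stop gracefully
            if ("STOP".equals(item)) {
                stepExecution.setTerminateOnly();
            }
            // ... write the item ...
        }
    }
}
```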
### Adding a Footer Record
Often, when writing to flat files, a “footer” record must be appended to the end of the
file, after all processing has been completed. This can be achieved using the `FlatFileFooterCallback` interface provided by Spring Batch. The `FlatFileFooterCallback` (and its counterpart, the `FlatFileHeaderCallback`) are optional properties of the `FlatFileItemWriter` and can be added to an item writer.
```
public interface FlatFileFooterCallback {

    void writeFooter(Writer writer) throws IOException;

}
```
#### Writing a Summary Footer
A common requirement involving footer records is to aggregate information during the
output process and to append this information to the end of the file. This footer often
serves as a summarization of the file or provides checksum information.

The `open` method retrieves any existing `totalAmount` from the `ExecutionContext` and uses it as the
starting point for processing, allowing the `TradeItemWriter` to pick up on restart where
it left off the previous time the `Step` was run.
### Driving Query Based ItemReaders
In the [chapter on readers and writers](readersAndWriters.html), database input using
paging was discussed. Many database vendors, such as DB2, have extremely pessimistic
locking strategies that can cause issues if the table being read also needs to be used by
other portions of the online application.

An `ItemProcessor` should be used to transform the key obtained from the driving query
into a full `Foo` object. An existing DAO can be used to query for the full object based
on the key.
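A sketch of such a processor; `Foo`, `FooDao`, and `findById` are illustrative stand-ins for your own domain class and DAO:

```
import org.springframework.batch.item.ItemProcessor;

// Minimal stand-ins for a real domain class and DAO
class Foo {
}

interface FooDao {
    Foo findById(Long id);
}

public class KeyToFooProcessor implements ItemProcessor<Long, Foo> {

    private final FooDao fooDao;

    public KeyToFooProcessor(FooDao fooDao) {
        this.fooDao = fooDao;
    }

    // Expand the key returned by the driving query into the full domain object
    @Override
    public Foo process(Long key) throws Exception {
        return fooDao.findById(key);
    }
}
```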
### Multi-Line Records
While it is usually the case with flat files that each record is confined to a single
line, it is common that a file might have records spanning multiple lines with multiple
formats.
### Executing System Commands
Many batch jobs require that an external command be called from within the batch job.
Such a process could be kicked off separately by the scheduler, but the advantage of
common metadata about the run would be lost. Furthermore, a multi-step job would also
need to be split up into multiple jobs as well. Because the need is so common, Spring
Batch provides a `SystemCommandTasklet` for calling system commands:

```
@Bean
public SystemCommandTasklet tasklet() {
    SystemCommandTasklet tasklet = new SystemCommandTasklet();

    tasklet.setCommand("echo hello");
    tasklet.setTimeout(5000);
    tasklet.setInterruptOnCancel(true);

    return tasklet;
}
```
### Handling Step Completion When No Input is Found
In many batch scenarios, finding no rows in a database or file to process is not
exceptional. The `Step` is simply considered to have found no work and completes with 0
items read.
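To fail the step instead when no input is found, a `StepExecutionListener` can inspect the execution during the `afterStep` phase. A sketch:

```
public class NoWorkFoundStepExecutionListener extends StepExecutionListenerSupport {

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        if (stepExecution.getReadCount() == 0) {
            return ExitStatus.FAILED;
        }
        return null;
    }
}
```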
The preceding `StepExecutionListener` inspects the `readCount` property of the `StepExecution` to determine whether no items were read. If that
is the case, an exit code `FAILED` is returned, indicating that the `Step` should fail.
Otherwise, `null` is returned, which does not affect the status of the `Step`.
### Passing Data to Future Steps
It is often useful to pass information from one step to another. This can be done through
the `ExecutionContext`. The catch is that there are two `ExecutionContexts`: one at the `Step` level and one at the `Job` level. The `Step` `ExecutionContext` remains only as
long as the step is active, whereas the `Job` `ExecutionContext` remains through the whole `Job`.
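One common way to hand data to a later step is to write it to the `Step` `ExecutionContext` and promote selected keys to the `Job` level with the framework's `ExecutionContextPromotionListener`. A minimal sketch (the key name is an assumption):

```
@Bean
public ExecutionContextPromotionListener promotionListener() {
    ExecutionContextPromotionListener listener = new ExecutionContextPromotionListener();
    // Keys written to the step's ExecutionContext that should be copied
    // to the job's ExecutionContext when the step completes
    listener.setKeys(new String[] { "someKey" });
    return listener;
}
```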
# The Domain Language of Batch
The preceding diagram highlights the key concepts that make up the domain language of
Spring Batch. A Job has one to many steps, each of which has exactly one `ItemReader`,
one `ItemProcessor`, and one `ItemWriter`. A job needs to be launched (with a `JobLauncher`), and metadata about the currently running process needs to be stored (in a `JobRepository`).
### Job
This section describes stereotypes relating to the concept of a batch job. A `Job` is an
entity that encapsulates an entire batch process. As is common with other Spring
projects, a `Job` is wired together with either an XML configuration file or Java-based
configuration.
#### JobInstance
A `JobInstance` refers to the concept of a logical job run. Consider a batch job that
should be run once at the end of the day, such as the 'EndOfDay' `Job` from the preceding
diagram.

Using a new `JobInstance` means 'start from the
beginning', and using an existing instance generally means 'start from where you left
off'.
#### JobParameters
Having discussed `JobInstance` and how it differs from `Job`, the natural question to ask
is: "How is one `JobInstance` distinguished from another?" The answer is: `JobParameters`. A `JobParameters` object holds a set of parameters used to start a batch
job.

There is one `Job` (EndOfDay) and two `JobInstances`: one started with a job parameter of 01-01-2017 and another with
a parameter of 01-02-2017. Thus, the contract can be defined as: `JobInstance` = `Job` + identifying `JobParameters`.

| |Not all job parameters are required to contribute to the identification of a `JobInstance`. By default, they do so. However, the framework also allows the submission<br/>of a `Job` with parameters that do not contribute to the identity of a `JobInstance`.|
|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
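As a sketch of how this looks in code (parameter names are illustrative), `JobParametersBuilder` lets you mark a parameter as non-identifying by passing `false` for the identifying flag:

```
JobParameters jobParameters = new JobParametersBuilder()
        .addString("schedule.date", "2017-01-01")       // identifying (the default)
        .addString("triggered.by", "scheduler", false)  // non-identifying
        .toJobParameters();
```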
#### JobExecution
A `JobExecution` refers to the technical concept of a single attempt to run a Job. An
execution may end in failure or success, but the `JobInstance` corresponding to a given
execution is not considered complete unless the execution completes successfully.

In this scenario, there is still only one row
in both the `JobInstance` and `JobParameters` tables and two extra entries in the `JobExecution` table.

| |Column names may have been abbreviated or removed for the sake of clarity and<br/>formatting.|
|---|---------------------------------------------------------------------------------------------|
### Step
A `Step` is a domain object that encapsulates an independent, sequential phase of a batch
job. Therefore, every Job is composed entirely of one or more steps. A `Step` contains
all of the information necessary to define and control the actual batch processing.

As with a `Job`, a `Step` has an individual `StepExecution` that correlates with a unique `JobExecution`.

Figure 4. Job Hierarchy With Steps
#### StepExecution
A `StepExecution` represents a single attempt to execute a `Step`. A new `StepExecution` is created each time a `Step` is run, similar to `JobExecution`. However, if a step fails
to execute because the step before it fails, no execution is persisted for it. A `StepExecution` is created only when its `Step` is actually started.

The following table lists the properties for `StepExecution`:

| Property | Description |
|----------------|------------------------------------------------------------------------|
| filterCount | The number of items that have been 'filtered' by the `ItemProcessor`. |
| writeSkipCount | The number of times `write` has failed, resulting in a skipped item. |
### ExecutionContext
An `ExecutionContext` represents a collection of key/value pairs that are persisted and
controlled by the framework in order to give developers a place to store persistent
state that is scoped to a `StepExecution` object or a `JobExecution` object.
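The comparison referenced in the next paragraph boils down to something like the following sketch (assuming `stepExecution` and `jobExecution` objects are in scope):

```
ExecutionContext ecStep = stepExecution.getExecutionContext();
ExecutionContext ecJob = jobExecution.getExecutionContext();
// ecStep does not equal ecJob
```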
As noted in the comment, `ecStep` does not equal `ecJob`. They are two different `ExecutionContexts`. The one scoped to the `Step` is saved at every commit point in the `Step`, whereas the one scoped to the Job is saved in between every `Step` execution.
### JobRepository
`JobRepository` is the persistence mechanism for all of the Stereotypes mentioned above.
It provides CRUD operations for `JobLauncher`, `Job`, and `Step` implementations. When a `Job` is first launched, a `JobExecution` is obtained from the repository, and, during
the course of execution, `StepExecution` and `JobExecution` implementations are persisted
by passing them to the repository.

In XML, the batch namespace provides support for configuring a `JobRepository` instance
with the `<job-repository>` tag.

When using Java configuration, the `@EnableBatchProcessing` annotation provides a `JobRepository` as one of the components automatically configured out of the box.
### JobLauncher
`JobLauncher` represents a simple interface for launching a `Job` with a given set of `JobParameters`.
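Its essential method is `run`, sketched here per the Spring Batch API:

```
public interface JobLauncher {

    public JobExecution run(Job job, JobParameters jobParameters)
            throws JobExecutionAlreadyRunningException, JobRestartException,
                   JobInstanceAlreadyCompleteException, JobParametersInvalidException;
}
```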
It is expected that implementations obtain a valid `JobExecution` from the `JobRepository` and execute the `Job`.
### Item Reader
`ItemReader` is an abstraction that represents the retrieval of input for a `Step`, one
item at a time. When the `ItemReader` has exhausted the items it can provide, it
indicates this by returning `null`. More details about the `ItemReader` interface and its
various implementations can be found in [Readers And Writers](readersAndWriters.html#readersAndWriters).
### Item Writer
`ItemWriter` is an abstraction that represents the output of a `Step`, one batch or chunk
of items at a time. Generally, an `ItemWriter` has no knowledge of the input it should
receive next and knows only the item that was passed in its current invocation. More
details about the `ItemWriter` interface and its various implementations can be found in [Readers And Writers](readersAndWriters.html#readersAndWriters).
### Item Processor
`ItemProcessor` is an abstraction that represents the business processing of an item.
While the `ItemReader` reads one item, and the `ItemWriter` writes them, the `ItemProcessor` provides an access point to transform or apply other business processing.
If, while processing the item, it is determined that the item is not valid, returning `null` indicates that the item should not be written out. More details about the `ItemProcessor` interface can be found in [Readers And Writers](readersAndWriters.html#readersAndWriters).
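A simplified sketch of the three interfaces side by side (checked-exception subtypes are omitted for brevity):

```
public interface ItemReader<T> {
    // Returns null when the input is exhausted
    T read() throws Exception;
}

public interface ItemProcessor<I, O> {
    // Returning null filters the item out (it is not written)
    O process(I item) throws Exception;
}

public interface ItemWriter<T> {
    // Receives one chunk of items at a time
    void write(List<? extends T> items) throws Exception;
}
```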
### Batch Namespace
Many of the domain concepts listed previously need to be configured in a Spring `ApplicationContext`. While there are implementations of the interfaces above that can be
used in a standard bean definition, a namespace has been provided for ease of
configuration.
# Glossary
## Appendix A: Glossary

### Spring Batch Glossary
Batch
# Configuring and Running a Job
There are many considerations for how a `Job` will be run and how its meta-data will be
stored during that run. This chapter will explain the various configuration
options and runtime concerns of a `Job`.
### Configuring a Job
There are multiple implementations of the [`Job`](#configureJob) interface. However,
builders abstract away the difference in configuration.
In addition to steps, a job configuration can contain other elements that help with
parallelization (`<split>`), declarative flow control (`<decision>`), and externalization
of flow definitions (`<flow/>`).
#### Restartability
One key issue when executing a batch job concerns the behavior of a `Job` when it is
restarted. The launching of a `Job` is considered to be a 'restart' if a `JobExecution` already exists for the particular `JobInstance`. Ideally, all jobs should be able to start
up where they left off, but there are scenarios where this is not possible.

This snippet of JUnit code shows how attempting to create a `JobExecution` the first time
for a non-restartable job will cause no issues. However, the second
attempt will throw a `JobRestartException`.
#### Intercepting Job Execution
During the course of the execution of a
Job, it may be useful to be notified of various events in its lifecycle so that custom
logic may be executed. Spring Batch provides the `JobExecutionListener` interface for
this purpose.

The annotations corresponding to this interface are:

* `@BeforeJob`

* `@AfterJob`
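A sketch of an annotated listener (the class name and log output are illustrative):

```
public class JobLoggerListener {

    @BeforeJob
    public void beforeJob(JobExecution jobExecution) {
        System.out.println("Job starting: " + jobExecution.getJobInstance().getJobName());
    }

    @AfterJob
    public void afterJob(JobExecution jobExecution) {
        System.out.println("Job ended with status: " + jobExecution.getStatus());
    }
}
```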
#### Inheriting from a Parent Job
If a group of Jobs share similar, but not
identical, configurations, then it may be helpful to define a "parent" `Job` from which the concrete
Jobs may inherit properties.

The child `Job` merges the parent's list of listeners
with its own to produce a `Job` with two listeners and one `Step`.

Please see the section on [Inheriting from a Parent Step](step.html#inheritingFromParentStep) for more detailed information.
#### JobParametersValidator
A job declared in the XML namespace or using any subclass of `AbstractJob` can optionally declare a validator for the job parameters at
runtime. This is useful when, for instance, you need to assert that a job
is started with all its mandatory parameters.
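A sketch using the framework's `DefaultJobParametersValidator` attached through the job builder (the parameter names are assumptions):

```
@Bean
public Job job1(JobBuilderFactory jobs, Step step1) {
    DefaultJobParametersValidator validator = new DefaultJobParametersValidator();
    validator.setRequiredKeys(new String[] { "schedule.date" }); // must be present
    validator.setOptionalKeys(new String[] { "run.id" });        // allowed but not required

    return jobs.get("job1")
            .validator(validator)
            .start(step1)
            .build();
}
```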
### Java Config
Spring 3 brought the ability to configure applications with Java instead of XML. As of
Spring Batch 2.2.0, batch jobs can be configured using the same Java configuration.
### Configuring a JobRepository
When using `@EnableBatchProcessing`, a `JobRepository` is provided out of the box for you.
This section addresses configuring your own.
If these properties are not set, the defaults will be used; they are shown above for
awareness purposes. The max varchar length defaults to 2500, which is the
length of the long `VARCHAR` columns in the [sample schema scripts](schema-appendix.html#metaDataSchemaOverview).
#### Transaction Configuration for the JobRepository
If the namespace or the provided `FactoryBean` is used, transactional advice is
automatically created around the repository. This is to ensure that the batch meta-data,
including state that is necessary for restarts after a failure, is persisted correctly.
#### Changing the Table Prefix
Another modifiable property of the `JobRepository` is the table prefix of the meta-data
tables. By default, they are all prefaced with `BATCH_`. `BATCH_JOB_EXECUTION` and `BATCH_STEP_EXECUTION` are two examples. However, there are potential reasons to modify this
prefix.

Given the preceding changes, every query to the meta-data tables is prefixed with `SYSTEM.TEST_`.
| |Only the table prefix is configurable. The table and column names are not.|
|---|--------------------------------------------------------------------------|
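In Java configuration, the equivalent is a sketch along these lines, using the `JobRepositoryFactoryBean`:

```
@Bean
public JobRepository jobRepository(DataSource dataSource, PlatformTransactionManager transactionManager) throws Exception {
    JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
    factory.setDataSource(dataSource);
    factory.setTransactionManager(transactionManager);
    // Meta-data tables are then addressed as SYSTEM.TEST_JOB_EXECUTION, and so on
    factory.setTablePrefix("SYSTEM.TEST_");
    factory.afterPropertiesSet();
    return factory.getObject();
}
```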
#### In-Memory Repository
There are scenarios in which you may not want to persist your domain objects to the
database. One reason may be speed; storing domain objects at each commit point takes extra
time.

A transaction manager is still needed, because there are rollback semantics within the
framework and because the business logic might still be transactional (such as RDBMS
access). For testing purposes, many people find the `MapJobRepositoryFactoryBean` useful.
| |The `MapJobRepositoryFactoryBean` and related classes have been deprecated in v4 and are scheduled<br/>for removal in v5. If you want to use an in-memory job repository, you can use an embedded database<br/>like H2, Apache Derby or HSQLDB. There are several ways to create an embedded database and use it in<br/>your Spring Batch application. One way to do that is by using the APIs from [Spring JDBC](https://docs.spring.io/spring-framework/docs/current/reference/html/data-access.html#jdbc-embedded-database-support):<br/><br/>```<br/>@Bean<br/>public DataSource dataSource() {<br/> return new EmbeddedDatabaseBuilder()<br/> .setType(EmbeddedDatabaseType.H2)<br/> .addScript("/org/springframework/batch/core/schema-drop-h2.sql")<br/> .addScript("/org/springframework/batch/core/schema-h2.sql")<br/> .build();<br/>}<br/>```<br/><br/>Once you have defined your embedded datasource as a bean in your application context, it should be picked<br/>up automatically if you use `@EnableBatchProcessing`. Otherwise you can configure it manually using the<br/>JDBC based `JobRepositoryFactoryBean` as shown in the [Configuring a JobRepository section](#configuringJobRepository).|
|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
#### Non-standard Database Types in a Repository
If you are using a database platform that is not in the list of supported platforms, you
may be able to use one of the supported types, if the SQL variant is close enough. To do
this, you can use the raw `JobRepositoryFactoryBean` instead of the namespace shortcut and
use it to set the database type to the closest match.

If even that doesn’t work, or you are not using an RDBMS, then the
only option may be to implement the various `Dao` interfaces that the `SimpleJobRepository` depends
on and wire one up manually in the normal Spring way.
### Configuring a JobLauncher
When using `@EnableBatchProcessing`, a `JobLauncher` is provided out of the box for you.
This section addresses configuring your own.

Any implementation of the Spring `TaskExecutor` interface can be used to control how jobs are asynchronously
executed.
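A sketch of an asynchronous launcher; the synchronous default is swapped for a `SimpleAsyncTaskExecutor` (in production a bounded pool is usually preferable):

```
@Bean
public JobLauncher jobLauncher(JobRepository jobRepository) throws Exception {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
    jobLauncher.setJobRepository(jobRepository);
    // Run jobs in a background thread so run() returns immediately
    jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
    jobLauncher.afterPropertiesSet();
    return jobLauncher;
}
```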
### Running a Job
At a minimum, launching a batch job requires two things: the `Job` to be launched and a `JobLauncher`. Both can be contained within the same
context or different contexts. For example, if launching a job from the
command line, a new JVM is instantiated for each `Job`, and thus every
job will have its own `JobLauncher`. However, if
running from within a web container within the scope of an `HttpRequest`, there will usually be one `JobLauncher`, configured for asynchronous job
launching, that multiple requests will invoke to launch their jobs.
#### Running Jobs from the Command Line
For users that want to run their jobs from an enterprise
scheduler, the command line is the primary interface. This is because
most schedulers work directly with operating system processes, primarily
kicked off with shell scripts. There are many ways
to launch a Java process besides a shell script, such as Perl, Ruby, or
even 'build tools' such as ant or maven. However, because most people
are familiar with shell scripts, this example will focus on them.
##### The CommandLineJobRunner
Because the script launching the job must kick off a Java
Virtual Machine, there needs to be a class with a main method to act
as the primary entry point. Spring Batch provides an implementation that
serves just this purpose: `CommandLineJobRunner`.

The preceding example is overly simplistic, since there are many more requirements to
run a batch job in Spring Batch in general, but it serves to show the two main
requirements of the `CommandLineJobRunner`: `Job` and `JobLauncher`.
##### ExitCodes
When launching a batch job from the command-line, an enterprise
scheduler is often used. Most schedulers are fairly dumb and work only
at the process level. This means that they only know about some operating
system process, such as a shell script that they invoke.

All that needs to be done to provide your own `ExitCodeMapper` is to declare the implementation
as a root level bean and ensure that it is part of the `ApplicationContext` that is loaded by the
runner.
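A sketch of a custom mapper; the numeric codes are arbitrary choices matching a hypothetical scheduler's conventions:

```
public class CustomExitCodeMapper implements ExitCodeMapper {

    @Override
    public int intValue(String exitCode) {
        if (ExitStatus.COMPLETED.getExitCode().equals(exitCode)) {
            return 0;  // success
        }
        if (ExitStatus.FAILED.getExitCode().equals(exitCode)) {
            return 10; // failure code understood by the scheduler
        }
        return 1;      // anything else is a generic error
    }
}
```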
#### Running Jobs from within a Web Container
Historically, offline processing such as batch jobs have been
launched from the command-line, as described above. However, there are
many cases where launching from an `HttpRequest` is a better option.

Figure 4. Asynchronous Job Launcher Sequence From Web Container

The controller in this case is a Spring MVC controller. More
information on Spring MVC can be found in the [Spring Framework Reference Guide](https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html#mvc).
The controller launches a `Job` using a `JobLauncher` that has been configured to launch [asynchronously](#runningJobsFromWebContainer), which
immediately returns a `JobExecution`. The `Job` will likely still be running; however, this
nonblocking behavior allows the controller to return immediately, which
is required when handling an `HttpRequest`.
```
@Controller
public class JobLauncherController {

    @Autowired
    JobLauncher jobLauncher;

    @Autowired
    Job job;

    @RequestMapping("/jobLauncher.html")
    public void handle() throws Exception {
        jobLauncher.run(job, new JobParameters());
    }
}
```
### Advanced Meta-Data Usage
So far, both the `JobLauncher` and `JobRepository` interfaces have been
discussed. Together, they represent simple launching of a job, and basic
CRUD operations of batch domain entities.

The `JobExplorer` and `JobOperator` interfaces, which will be discussed
below, add additional functionality for querying and controlling the meta
data.
#### Querying the Repository
The most basic need before any advanced features is the ability to
query the repository for existing executions. This functionality is
provided by the `JobExplorer` interface.
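A short usage sketch of the read-only queries `JobExplorer` offers (the job name is a placeholder):

```
List<JobInstance> instances = jobExplorer.getJobInstances("endOfDayJob", 0, 10);
for (JobInstance instance : instances) {
    List<JobExecution> executions = jobExplorer.getJobExecutions(instance);
    // Inspect executions, for example executions.get(0).getStatus()
}
```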
#### JobRegistry
A `JobRegistry` (and its parent interface `JobLocator`) is not mandatory, but it can be
useful if you want to keep track of which jobs are available in the context. It is also
useful for collecting jobs in an application context when they have been created
elsewhere (for example, in child contexts).

There are two ways to populate a `JobRegistry` automatically: using
a bean post processor and using a registrar lifecycle component. These
two mechanisms are described in the following sections.
##### JobRegistryBeanPostProcessor
This is a bean post-processor that can register all jobs as they are created.
The post-processor in the example has been given an id so that it can be included in child
contexts (e.g. as a parent bean definition) and cause all jobs created
there to also be registered automatically.
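A sketch of declaring the post-processor in Java config (the bean name is an assumption):

```
@Bean
public JobRegistryBeanPostProcessor jobRegistryBeanPostProcessor(JobRegistry jobRegistry) {
    JobRegistryBeanPostProcessor postProcessor = new JobRegistryBeanPostProcessor();
    postProcessor.setJobRegistry(jobRegistry);
    return postProcessor;
}
```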
##### `AutomaticJobRegistrar`
This is a lifecycle component that creates child contexts and registers jobs from those
contexts as they are created. One advantage of doing this is that, while the job names in
the child contexts still have to be globally unique in the registry, their dependencies
can have 'natural' names.

For instance, this might be desirable if there are jobs
defined in the main parent context as well as in the child
locations.
#### JobOperator
As previously discussed, the `JobRepository` provides CRUD operations on the meta-data, and the `JobExplorer` provides read-only operations on the
meta-data. However, those operations are most useful when used together
to perform common monitoring tasks such as stopping, restarting, or
summarizing a `Job`, as is commonly done by batch operators. Spring Batch
provides these types of operations in the `JobOperator` interface.
| |If you set the table prefix on the job repository, don’t forget to set it on the job explorer as well.|
|---|------------------------------------------------------------------------------------------------------|
#### JobParametersIncrementer
Most of the methods on `JobOperator` are
self-explanatory, and more detailed explanations can be found in the [javadoc of the interface](https://docs.spring.io/spring-batch/docs/current/api/org/springframework/batch/core/launch/JobOperator.html). However, the `startNextInstance` method is worth noting. This
method always starts a new instance of a `Job`.
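A sketch of a simple incrementer, equivalent in spirit to the framework's `RunIdIncrementer` (the parameter name `run.id` is a convention, not a requirement):

```
public class SampleIncrementer implements JobParametersIncrementer {

    public JobParameters getNext(JobParameters parameters) {
        if (parameters == null || parameters.isEmpty()) {
            return new JobParametersBuilder().addLong("run.id", 1L).toJobParameters();
        }
        long id = parameters.getLong("run.id", 1L) + 1;
        return new JobParametersBuilder().addLong("run.id", id).toJobParameters();
    }
}
```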
#### Stopping a Job
One of the most common use cases of `JobOperator` is gracefully stopping a
Job.
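A sketch of the call (the job name is a placeholder):

```
Set<Long> executions = jobOperator.getRunningExecutions("sampleJob");
jobOperator.stop(executions.iterator().next());
```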
The shutdown is not immediate, since there is no way to force immediate shutdown,
especially if the execution is currently in developer code that the framework has no
control over, such as a business service. However, as soon as control is returned back to the
framework, it sets the status of the current `StepExecution` to `BatchStatus.STOPPED`, saves it, then does the same
for the `JobExecution` before finishing.
#### Aborting a Job
A job execution which is `FAILED` can be
restarted (if the `Job` is restartable). A job execution whose status is `ABANDONED` will not be restarted by the framework.
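A sketch of marking an execution abandoned through the `JobOperator` (`jobExecutionId` is assumed to identify a failed execution):

```
// Mark a failed execution so that the framework never restarts it
jobOperator.abandon(jobExecutionId);
```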
# JSR-352 Support
As of Spring Batch 3.0, support for JSR-352 has been fully implemented. This section is not a replacement for
the spec itself; instead, it intends to explain how the JSR-352-specific concepts apply to Spring Batch.
Additional information on JSR-352 can be found via the
JCP here: [https://jcp.org/en/jsr/detail?id=352](https://jcp.org/en/jsr/detail?id=352)
### General Notes about Spring Batch and JSR-352
Spring Batch and JSR-352 are structurally the same. They both have jobs that are made up of steps. They
both have readers, processors, writers, and listeners. However, their interactions are subtly different.
Spring Batch artifacts (readers, writers, and so on) will work within a job configured with JSR-352's JSL. However, it is
important to note that batch artifacts that have been developed against the JSR-352 interfaces will not work
within a traditional Spring Batch job.
### Setup
#### Application Contexts
All JSR-352 based jobs within Spring Batch consist of two application contexts: a parent context that
contains beans related to the infrastructure of Spring Batch, such as the `JobRepository` and `PlatformTransactionManager`, and a child context that consists of the configuration
of the job to be run.
| |The base context is not processed by the JSR-352 processors for things like property injection so<br/>no components requiring that additional processing should be configured there.|
|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
#### Launching a JSR-352 based job
JSR-352 requires a very simple path to executing a batch job. The following code is all that is needed to
execute your first batch job:
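A sketch of that entry point (the job XML name and properties are placeholders):

```
JobOperator jobOperator = BatchRuntime.getJobOperator();
long jobExecutionId = jobOperator.start("fooJob", new Properties());
```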
Spring Batch bootstraps a base infrastructure context the first time `BatchRuntime.getJobOperator()` is called.
| |None of the above beans are optional for executing JSR-352 based jobs. All may be overridden to<br/>provide customized functionality as needed.|
|---|-----------------------------------------------------------------------------------------------------------------------------------------------|
### Dependency Injection
JSR-352 is based heavily on the Spring Batch programming model. As such, while not explicitly requiring a
formal dependency injection implementation, DI of some kind is implied. Spring Batch supports all three
methods of loading batch artifacts defined by the specification.

A batch artifact referenced this way requires a no-argument constructor, which is used to create the bean.
### Batch Properties
#### Property Support
JSR-352 allows for properties to be defined at the Job, Step, and batch artifact level by way of
configuration in the JSL. `Properties` may be configured on any batch artifact.
#### @BatchProperty annotation
`Properties` are referenced in batch artifacts by annotating class fields with the `@BatchProperty` and `@Inject` annotations (both annotations
are required by the spec). As defined by JSR-352, fields for properties must be String typed. Any type
conversion is up to the implementing developer to perform.

The value of the field "propertyName1" will be "propertyValue1".
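A sketch of the injection pattern (the reader body is a stub):

```
public class MyItemReader extends AbstractItemReader {

    @Inject
    @BatchProperty
    private String propertyName1; // receives "propertyValue1" from the JSL configuration

    @Override
    public Object readItem() throws Exception {
        return null;
    }
}
```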
#### Property Substitution
Property substitution is provided by way of operators and simple conditional expressions. The general
usage is `#{operator['key']}`.
In the example above, the result resolves to the value of the system property `file.separator`. If none of the
expressions can be resolved, an empty String is returned. Multiple conditions can be
used, which are separated by a ';'.
### Processing Models
JSR-352 provides the same two basic processing models that Spring Batch does:
* Item based processing - Using an `ItemReader`, and optionally an `ItemProcessor` and `ItemWriter`.

* Task based processing - Using a `javax.batch.api.Batchlet` implementation. This processing model is the same as the `org.springframework.batch.core.step.tasklet.Tasklet` based processing
currently available.
#### Item based processing
Item based processing in this context is a chunk size being set by the number of items read by an `ItemReader`. To configure a step this way, specify the `item-count` (which defaults to 10) and optionally configure the `checkpoint-policy` as item (this is the default).

The `time-limit` attribute sets a time limit for how long the number of items specified has to be processed. If
the timeout is reached, the chunk completes with however many items have been read by
then, regardless of what the `item-count` is configured to be.
#### Custom checkpointing
JSR-352 calls the process around the commit interval within a step "checkpointing".
Item-based checkpointing is one approach, as mentioned above. However, this is not robust
enough in many cases. Because of this, the spec allows for the implementation of a custom
checkpointing algorithm via the `CheckpointAlgorithm` interface.
### Running a job
The entrance to executing a JSR-352 based job is through the `javax.batch.operations.JobOperator`. Spring Batch provides its own implementation of
this interface (`org.springframework.batch.core.jsr.launch.JsrJobOperator`).

When a job is started via the JSR-352
based `JobOperator#start(String jobXMLName, Properties jobParameters)`, the framework
will always create a new `JobInstance` (JSR-352 job parameters are non-identifying). In order to
restart a job, a call to `JobOperator#restart(long executionId, Properties restartParameters)` is required.
### Contexts
JSR-352 defines two context objects that are used to interact with the meta-data of a job or step from
within a batch artifact: `javax.batch.runtime.context.JobContext` and `javax.batch.runtime.context.StepContext`. Both of these are available for injection into any
step-level batch artifact.

In Spring Batch, the `JobContext` and `StepContext` wrap their
corresponding execution objects (`JobExecution` and `StepExecution`, respectively). Data stored through `StepContext#setPersistentUserData(Serializable data)` is stored in the
Spring Batch `StepExecution#executionContext`.
### Step Flow
Within a JSR-352 based job, the flow of steps works similarly as it does within Spring Batch.
However, there are a few subtle differences:
* Transition element ordering - In a standard Spring Batch job, transition elements are
sorted from most specific to least specific and evaluated in that order. JSR-352 jobs
evaluate transition elements in the order they are specified in the XML.
### Scaling a JSR-352 batch job
Traditional Spring Batch jobs have four ways of scaling (the last two capable of being executed across
multiple JVMs):
JSR-352 provides two options for scaling batch jobs. Both options support only a single JVM:

* Multi-threaded steps

* Partitioning - Conceptually the same as Spring Batch, however implemented slightly differently.
#### Partitioning
Conceptually, partitioning in JSR-352 is the same as it is in Spring Batch. Meta-data is provided
to each worker to identify the input to be processed, with the workers reporting back to the manager the
results upon completion. However, there are some important differences:

|`javax.batch.api.partition.PartitionAnalyzer`|End point that receives the information collected by the `PartitionCollector` as well as the resulting<br/>statuses from a completed partition.|
|`javax.batch.api.partition.PartitionReducer`|Provides the ability to provide compensating logic for a partitioned<br/>step.|
### Testing
Since all JSR-352 based jobs are executed asynchronously, it can be difficult to determine when a job has
completed. To help with testing, Spring Batch provides `org.springframework.batch.test.JsrTestUtils`. This utility class provides the
ability to start a job, restart a job, and wait for it to complete; once the job
completes, the associated `JobExecution` is returned.
# Monitoring and metrics
Since version 4.2, Spring Batch provides support for batch monitoring and metrics
based on [Micrometer](https://micrometer.io/). This section describes
which metrics are provided out-of-the-box and how to contribute custom metrics.
### Built-in metrics
Metrics collection does not require any specific configuration. All metrics provided
by the framework are registered in [Micrometer’s global registry](https://micrometer.io/docs/concepts#_global_registry) under the `spring.batch` prefix. The following table explains all the metrics in detail:
| |The `status` tag can be either `SUCCESS` or `FAILURE`.|
|---|------------------------------------------------------|
### Custom metrics
If you want to use your own metrics in your custom components, we recommend using
Micrometer APIs directly. The following is an example of how to time a `Tasklet`:
```
public class MyTimedTasklet implements Tasklet {

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
        Timer.Sample sample = Timer.start(Metrics.globalRegistry);
        String status = "success";
        try {
            // do some work
        } catch (Exception e) {
            // handle exception
            status = "failure";
        } finally {
            sample.stop(Timer.builder("my.tasklet.timer")
                    .description("Duration of MyTimedTasklet")
                    .tag("status", status)
                    .register(Metrics.globalRegistry));
        }
        return RepeatStatus.FINISHED;
    }
}
```
### Disabling metrics
Metrics collection is a concern similar to logging. Disabling logs is typically
done by configuring the logging library and this is no different for metrics.
# Item processing
A difference between `ItemProcessor` and `ItemReader` or `ItemWriter` is that an `ItemProcessor` is optional for a `Step`.
### Chaining ItemProcessors
Performing a single transformation is useful in many scenarios, but what if you want to
'chain' together multiple `ItemProcessor` implementations? This can be accomplished using
the composite pattern mentioned previously. The following example builds a `CompositeItemProcessor` in Java configuration:

```
@Bean
public CompositeItemProcessor compositeProcessor() {
    List<ItemProcessor> delegates = new ArrayList<>(2);
    delegates.add(new FooProcessor());
    delegates.add(new BarProcessor());

    CompositeItemProcessor processor = new CompositeItemProcessor();
    processor.setDelegates(delegates);

    return processor;
}
```
### Filtering Records
One typical use for an item processor is to filter out records before they are passed to
the `ItemWriter`. Filtering is an action distinct from skipping. Skipping indicates that
a record is invalid, while filtering simply indicates that a record should not be written.

To filter a record, return `null` from the `ItemProcessor`. The framework detects
that the result is `null` and avoids adding that item to the list of records delivered to
the `ItemWriter`. As usual, an exception thrown from the `ItemProcessor` results in a
skip.
### Validating Input
In the [ItemReaders and ItemWriters](readersAndWriters.html#readersAndWriters) chapter, multiple approaches to parsing input have been
discussed. Each major implementation throws an exception if it is not 'well-formed'. The `FixedLengthTokenizer` throws an exception if a range of data is missing. Similarly,
attempting to access an index in a `RowMapper` or `FieldSetMapper` that does not exist or
is in a different format than the one expected causes an exception to be thrown.

```
@Bean
public BeanValidatingItemProcessor<Person> beanValidatingItemProcessor() throws Exception {
    BeanValidatingItemProcessor<Person> beanValidatingItemProcessor = new BeanValidatingItemProcessor<>();
    beanValidatingItemProcessor.setFilter(true);

    return beanValidatingItemProcessor;
}
```
### Fault Tolerance
When a chunk is rolled back, items that have been cached during reading may be
reprocessed. If a step is configured to be fault tolerant (typically by using skip or
retry processing), any `ItemProcessor` used should be implemented in a way that is
idempotent.
# Repeat
### RepeatTemplate
Batch processing is about repetitive actions, either as a simple optimization or as part
of a job. To strategize and generalize the repetition and to provide what amounts to an
iterator framework, Spring Batch has the `RepeatOperations` interface.

Some completion decisions are based on
considerations intrinsic to the work being done in the callback. Others are effectively
infinite loops as far as the callback is concerned, and the completion decision is
delegated to an external policy, as in the case shown in the preceding example.
#### RepeatContext
The method parameter for the `RepeatCallback` is a `RepeatContext`. Many callbacks ignore
the context. However, if necessary, it can be used as an attribute bag to store transient
data for the duration of the iteration.

The parent context is occasionally useful for storing data that need to be shared between
calls to `iterate`. This is the case, for instance, if you want to count the number of
occurrences of an event in the iteration and remember it across subsequent calls.
#### RepeatStatus
`RepeatStatus` is an enumeration used by Spring Batch to indicate whether processing has
finished. It has two possible `RepeatStatus` values, described in the following table:
`RepeatStatus` values can also be combined with a logical AND operation by using the `and()` method in `RepeatStatus`. The effect of this is to do a logical AND on the
continuable flag. In other words, if either status is `FINISHED`, then the result is `FINISHED`.
### Completion Policies
Inside a `RepeatTemplate`, the termination of the loop in the `iterate` method is
determined by a `CompletionPolicy`, which is also a factory for the `RepeatContext`. The `RepeatTemplate` has the responsibility to use the current policy to create a `RepeatContext` and pass that in to the `RepeatCallback` at every stage in the iteration.

Users might need to implement their own completion policies for more complicated
decisions. For example, a batch processing window that prevents batch jobs from executing
once the online systems are in use would require a custom policy.
### Exception Handling
If there is an exception thrown inside a `RepeatCallback`, the `RepeatTemplate` consults
an `ExceptionHandler`, which can decide whether or not to re-throw the exception.
The `SimpleLimitExceptionHandler` has an important optional property
called `useParent`. It is `false` by default, so the limit is only accounted for in the
current `RepeatContext`. When set to `true`, the limit is kept across sibling contexts in
a nested iteration (such as a set of chunks inside a step).
### Listeners
Often, it is useful to be able to receive additional callbacks for cross-cutting concerns
across a number of different iterations. For this purpose, Spring Batch provides the `RepeatListener` interface. The `RepeatTemplate` lets users register `RepeatListener` implementations, and they are given callbacks with the `RepeatContext` and `RepeatStatus` where available during the iteration.

The `open` and `close` callbacks come before and after the entire iteration, while `before`, `after`, and `onError` apply to the individual `RepeatCallback` calls.

Note that, when there is more than one listener, they are in a list, so there is an
order. In this case, `open` and `before` are called in the same order while `after`, `onError`, and `close` are called in reverse order.
### Parallel Processing
Implementations of `RepeatOperations` are not restricted to executing the callback
sequentially. It is quite important that some implementations are able to execute their
callbacks in parallel. To this end, Spring Batch provides the `TaskExecutorRepeatTemplate`, which uses the Spring `TaskExecutor` strategy to run the `RepeatCallback`. The default is to use a `SynchronousTaskExecutor`, which has the effect
of executing the whole iteration in the same thread (the same as a normal `RepeatTemplate`).
### Declarative Iteration
Sometimes there is some business processing that you know you want to repeat every time
it happens. The classic example of this is the optimization of a message pipeline. It is
more efficient to process a batch of messages, if they are arriving frequently, than to
bear the cost of a separate transaction for every message.
# Retry
To make processing more robust and less prone to failure, it sometimes helps to
automatically retry a failed operation in case it might succeed on a subsequent attempt.
Errors that are susceptible to intermittent failure are often transient in nature.
Examples include remote calls to a web service that fail because of a network glitch or a `DeadlockLoserDataAccessException` in a database update.
### `RetryTemplate`
| |The retry functionality was pulled out of Spring Batch as of 2.2.0.<br/>It is now part of a new library, [Spring Retry](https://github.com/spring-projects/spring-retry).|
|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
In the preceding example, we make a web service call and return the result to the user. If
that call fails, then it is retried until a timeout is reached.
#### `RetryContext`
The method parameter for the `RetryCallback` is a `RetryContext`. Many callbacks ignore
the context, but, if necessary, it can be used as an attribute bag to store data for the
duration of the iteration.

A `RetryContext` has a parent context if there is a nested retry in progress in the same
thread. The parent context is occasionally useful for storing data that need to be shared
between calls to `execute`.
#### `RecoveryCallback`
When a retry is exhausted, the `RetryOperations` can pass control to a different callback,
called the `RecoveryCallback`. To use this feature, clients pass in the callbacks together
to the same `execute` method.
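A sketch of the pattern using the current Spring Retry generics (`Foo`, `remoteService`, and `defaultValue` are placeholders):

```
Foo foo = template.execute(new RetryCallback<Foo, Exception>() {
    public Foo doWithRetry(RetryContext context) throws Exception {
        // Business logic that may fail transiently
        return remoteService.call();
    }
}, new RecoveryCallback<Foo>() {
    public Foo recover(RetryContext context) {
        // Alternate path once retries are exhausted
        return Foo.defaultValue();
    }
});
```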
If the business logic does not succeed before the template decides to abort, then the
client is given the chance to do some alternate processing through the recovery callback.
#### Stateless Retry
In the simplest case, a retry is just a while loop. The `RetryTemplate` can just keep
trying until it either succeeds or fails. The `RetryContext` contains some state to
determine whether to retry or abort, but this state is on the stack and there is no need
to store it anywhere globally, so we call this stateless retry. The distinction between
stateless and stateful retry is contained in the implementation of the `RetryPolicy` (the `RetryTemplate` can handle both). In a stateless retry, the retry callback is always
executed in the same thread it was on when it failed.
#### Stateful Retry
Where the failure has caused a transactional resource to become invalid, there are some
special considerations. This does not apply to a simple remote call because there is no
transactional resource (usually), but it does sometimes apply to a database update,
especially when using Hibernate.

The decision to retry or not is actually delegated to a regular `RetryPolicy`, so the
usual concerns about limits and timeouts can be injected there (described later in this
chapter).
### Retry Policies
Inside a `RetryTemplate`, the decision to retry or fail in the `execute` method is
determined by a `RetryPolicy`, which is also a factory for the `RetryContext`. The `RetryTemplate` has the responsibility to use the current policy to create a `RetryContext` and pass that in to the `RetryCallback` at every attempt. After a callback
fails, the `RetryTemplate` has to make a call to the `RetryPolicy` to ask it to update
its state (which is stored in the `RetryContext`) and then asks the policy if another
attempt can be made.

Users might need to implement their own retry policies for more customized decisions. For
instance, a custom retry policy makes sense when there is a well-known, solution-specific
classification of exceptions into retryable and not retryable.
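As a sketch, the off-the-shelf `SimpleRetryPolicy` from Spring Retry allows a fixed number of attempts for a named set of exception types (the classification below is an example choice):

```
SimpleRetryPolicy policy = new SimpleRetryPolicy(3, Collections
        .<Class<? extends Throwable>, Boolean>singletonMap(DeadlockLoserDataAccessException.class, true));

RetryTemplate template = new RetryTemplate();
template.setRetryPolicy(policy);
```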
### Backoff Policies
When retrying after a transient failure, it often helps to wait a bit before trying again,
because usually the failure is caused by some problem that can be resolved only by
waiting. If a `RetryCallback` fails, the `RetryTemplate` can pause execution according to
the `BackoffPolicy`.

It is often appropriate to
backoff with an exponentially increasing wait period, to avoid two retries getting into
lock step and both failing (this is a lesson learned from Ethernet). For this purpose,
Spring Batch provides the `ExponentialBackOffPolicy`.
### Listeners
Often, it is useful to be able to receive additional callbacks for cross cutting concerns
across a number of different retries. For this purpose, Spring Batch provides the `RetryListener` interface. The `RetryTemplate` lets users register `RetryListeners`, and
they are given callbacks with the `RetryContext` where available during the iteration.

Note that, when there is more than one listener, they are in a list, so there is an order.
In this case, `open` is called in the same order while `onError` and `close` are called in
reverse order.
### Declarative Retry
Sometimes, there is some business processing that you know you want to retry every time it
happens. The classic example of this is the remote service call. Spring Batch provides an
AOP interceptor that wraps a method call in a `RetryOperations` implementation for just
this purpose.
# Scaling and Parallel Processing
First, we review the single-process options. Then we review the multi-process options.
### Multi-threaded Step
The simplest way to start parallel processing is to add a `TaskExecutor` to your Step
configuration.
Some readers can be made thread-safe by wrapping them in a
synchronizing delegator. You can synchronize the call to `read()`, and as long as
processing and writing is the most expensive part of the chunk, your step may still
complete much faster than it would in a single-threaded configuration.
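A sketch in Java config (assuming `itemReader()` and `itemWriter()` beans exist elsewhere; the `throttleLimit` default is 4):

```
@Bean
public Step sampleStep(StepBuilderFactory stepBuilderFactory) {
    return stepBuilderFactory.get("sampleStep")
            .<String, String>chunk(10)
            .reader(itemReader())
            .writer(itemWriter())
            // Chunks are executed concurrently on threads from this executor
            .taskExecutor(new SimpleAsyncTaskExecutor("spring_batch"))
            .throttleLimit(4)
            .build();
}
```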
### Parallel Steps
As long as the application logic that needs to be parallelized can be split into distinct
responsibilities and assigned to individual steps, then it can be parallelized in a
single process.
See the section on [Split Flows](step.html#split-flows) for more detail.
### Remote Chunking
In remote chunking, the `Step` processing is split across multiple processes,
communicating with each other through some middleware.

See the section on [Spring Batch Integration - Remote Chunking](spring-batch-integration.html#remote-chunking) for more detail.
### Partitioning
Spring Batch also provides an SPI for partitioning a `Step` execution and executing it
remotely. In this case, the remote participants are `Step` instances that could just as
easily have been configured and used for local processing.

Spring Batch creates step executions for the partitions called "step1:partition0", and so
on. Many people prefer to call the manager step "step1:manager" for consistency. You can
use an alias for the step (by specifying the `name` attribute instead of the `id` attribute).
#### PartitionHandler
The `PartitionHandler` is the component that knows about the fabric of the remoting or
grid environment. It is able to send `StepExecution` requests to the remote `Step` instances, wrapped in some fabric-specific format, like a DTO. It does not have to know
how to split up the input data or how to aggregate the result of multiple `Step` executions.

The `TaskExecutorPartitionHandler` is useful for IO-intensive `Step` instances, such as
copying large numbers of files or replicating filesystems into content management
systems. It can also be used for remote execution by providing a `Step` implementation
that is a proxy for a remote invocation (such as using Spring Remoting).
#### Partitioner
The `Partitioner` has a simpler responsibility: to generate execution contexts as input
parameters for new step executions only (no need to worry about restarts). It has a
single method, as the following interface definition shows:
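Per the Spring Batch API:

```
public interface Partitioner {
    Map<String, ExecutionContext> partition(int gridSize);
}
```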
If the `Partitioner` also implements the optional `PartitionNameProvider` interface, then, on a restart, only the names are queried. If partitioning is expensive,
this can be a useful optimization. The names provided by the `PartitionNameProvider` must
match those provided by the `Partitioner`.
#### Binding Input Data to Steps
It is very efficient for the steps that are executed by the `PartitionHandler` to have
identical configuration and for their input parameters to be bound at runtime from the `ExecutionContext`. This is easy to do with the StepScope feature of Spring Batch,
covered in more detail in the section on [Late Binding](step.html#late-binding).
# Meta-Data Schema
## Appendix A: Meta-Data Schema

### Overview
The Spring Batch Metadata tables closely match the Domain objects that represent them in
Java. For example, `JobInstance`, `JobExecution`, `JobParameters`, and `StepExecution` map to `BATCH_JOB_INSTANCE`, `BATCH_JOB_EXECUTION`, `BATCH_JOB_EXECUTION_PARAMS`, and `BATCH_STEP_EXECUTION`, respectively. `ExecutionContext` maps to both `BATCH_JOB_EXECUTION_CONTEXT` and `BATCH_STEP_EXECUTION_CONTEXT`. The `JobRepository` is
responsible for saving and storing each Java object into its correct table. The following
image shows an ERD model of all 6 tables and their relationships to one another:

Figure 1. Spring Batch Meta-Data ERD
#### Example DDL Scripts
The Spring Batch Core JAR file contains example scripts to create the relational tables
for a number of database platforms (which are, in turn, auto-detected by the job
repository factory bean or namespace equivalent). These scripts can be
modified with additional indexes and constraints as desired. The file names are in the
form `schema-*.sql`, where "\*" is the short name of the target database platform.
The scripts are in the package `org.springframework.batch.core`.
#### Migration DDL Scripts
Spring Batch provides migration DDL scripts that you need to execute when you upgrade versions.
These scripts can be found in the Core Jar file under `org/springframework/batch/core/migration`.
Migration scripts are organized into folders corresponding to the version numbers in
which they were introduced:
* `4.1`: contains scripts needed if you are migrating from a version before `4.1` to version `4.1`
#### Version
Many of the database tables discussed in this appendix contain a version column. This
column is important because Spring Batch employs an optimistic locking strategy when
dealing with updates to the database. This means that each time a record is 'touched'
(updated), the value in the version column is incremented by one. When the repository goes
back to save the value, if the version number has changed, it throws an `OptimisticLockingFailureException`, indicating there has been an error with concurrent
access. This check is necessary, since, even though different batch jobs may be running
in different machines, they all use the same database tables.
#### Identity
`BATCH_JOB_INSTANCE`, `BATCH_JOB_EXECUTION`, and `BATCH_STEP_EXECUTION` each contain
columns ending in `_ID`. These fields act as primary keys for their respective tables.
In the preceding case, a table is used in place of each sequence. The Spring core class, `MySQLMaxValueIncrementer`, then increments the one column in this sequence in order to
give similar functionality.
### `BATCH_JOB_INSTANCE`
The `BATCH_JOB_INSTANCE` table holds all information relevant to a `JobInstance`, and
serves as the top of the overall hierarchy.

The following list describes each column in the table:

* `JOB_KEY`: A serialization of the `JobParameters` that uniquely identifies separate
instances of the same job from one another. (`JobInstances` with the same job name must
have different `JobParameters` and, thus, different `JOB_KEY` values).
### `BATCH_JOB_EXECUTION_PARAMS`
The `BATCH_JOB_EXECUTION_PARAMS` table holds all information relevant to the `JobParameters` object. It contains 0 or more key/value pairs passed to a `Job` and
serves as a record of the parameters with which a job was run. For each parameter that
contributes to the generation of a job's identity, the `IDENTIFYING` flag is set to true.

Note that there is no primary key for this table. This is because the framework has no
use for one and, thus, does not require it. If need be, you can add a primary key
with a database generated key without causing any issues to the framework itself.
### `BATCH_JOB_EXECUTION`
The `BATCH_JOB_EXECUTION` table holds all information relevant to the `JobExecution` object. Every time a `Job` is run, there is always a new `JobExecution`, and a new row in
this table. The following listing shows the definition of the `BATCH_JOB_EXECUTION` table:
The following list describes each column:
* `LAST_UPDATED`: Timestamp representing the last time this execution was persisted.
### `BATCH_STEP_EXECUTION`
The `BATCH_STEP_EXECUTION` table holds all information relevant to the `StepExecution` object. This table is similar in many ways to the `BATCH_JOB_EXECUTION` table, and there
is always at least one entry per `Step` for each `JobExecution` created. The following
The following list describes each column:
* `LAST_UPDATED`: Timestamp representing the last time this execution was persisted.
### `BATCH_JOB_EXECUTION_CONTEXT`
The `BATCH_JOB_EXECUTION_CONTEXT` table holds all information relevant to the `ExecutionContext` of a `Job`. There is exactly one `Job` `ExecutionContext` per `JobExecution`, and it contains all of the job-level data that is needed for a particular
job execution. This data typically represents the state that must be retrieved after a
The following list describes each column:
* `SERIALIZED_CONTEXT`: The entire context, serialized.
### `BATCH_STEP_EXECUTION_CONTEXT`
The `BATCH_STEP_EXECUTION_CONTEXT` table holds all information relevant to the `ExecutionContext` of a `Step`. There is exactly one `ExecutionContext` per `StepExecution`, and it contains all of the data that
needs to be persisted for a particular step execution. This data typically represents the
The following list describes each column:
* `SERIALIZED_CONTEXT`: The entire context, serialized.
### Archiving
Because there are entries in multiple tables every time a batch job is run, it is common
to create an archive strategy for the metadata tables. The tables themselves are designed
to show a record of what happened in the past and generally do not affect the run of any
job, with a few notable exceptions pertaining to restart:
this table for jobs that have not completed successfully prevents them from starting at
the correct point if run again.
### International and Multi-byte Characters
If you are using multi-byte character sets (such as Chinese or Cyrillic) in your business
processing, then those characters might need to be persisted in the Spring Batch schema.
Many users find that simply changing the schema to double the length of the `VARCHAR` columns is enough. Others prefer to configure the `JobRepository` with `max-varchar-length` set to half the
value of the `VARCHAR` column length. Some users have also reported that they use `NVARCHAR` in place of `VARCHAR` in their schema definitions. The best result depends on
the database platform and the way the database server has been configured locally.
### Recommendations for Indexing Meta Data Tables
Spring Batch provides DDL samples for the metadata tables in the core jar file for
several common database platforms. Index declarations are not included in that DDL,
# Spring Batch Integration
## Spring Batch Integration
### Spring Batch Integration Introduction
Many users of Spring Batch may encounter requirements that are
outside the scope of Spring Batch but that may be efficiently and
This section covers the following key concepts:
* [Externalizing
Batch Process Execution](#externalizing-batch-process-execution)
#### Namespace Support
Since Spring Batch Integration 1.3, dedicated XML namespace
support was added, with the aim of providing an easier configuration
could possibly create issues when updating the Spring Batch
Integration dependencies, as they may require more recent versions
of the XML schema.
#### Launching Batch Jobs through Messages
When starting batch jobs by using the core Spring Batch API, you
basically have two options:
message flow in order to start a Batch job. The EIP (Enterprise Integration Patterns) diagram that follows illustrates the flow:
Figure 1. Launch Batch Job
##### Transforming a file into a JobLaunchRequest
```
package io.spring.sbi;

public class FileMessageToJobRequest {
    // ...
}
```
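Because the listing above is collapsed in this view, the following sketch reconstructs what such a transformer typically looks like, modeled on the standard reference-guide example. The field names, `toRequest` method name, and parameter handling are assumptions and may differ from the original listing.

```
package io.spring.sbi;

import java.io.File;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.integration.launch.JobLaunchRequest;
import org.springframework.integration.annotation.Transformer;
import org.springframework.messaging.Message;

public class FileMessageToJobRequest {

    private Job job;

    private String fileParameterName;

    public void setFileParameterName(String fileParameterName) {
        this.fileParameterName = fileParameterName;
    }

    public void setJob(Job job) {
        this.job = job;
    }

    @Transformer
    public JobLaunchRequest toRequest(Message<File> message) {
        JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();

        // Expose the absolute path of the received file as a job parameter
        jobParametersBuilder.addString(fileParameterName,
                message.getPayload().getAbsolutePath());

        return new JobLaunchRequest(job, jobParametersBuilder.toJobParameters());
    }
}
```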
##### The `JobExecution` Response
When a batch job is being executed, a `JobExecution` instance is returned. This
instance can be used to determine the status of an execution. If
using the `JobExplorer`. For more
information, please refer to the Spring
Batch reference documentation on [Querying the Repository](job.html#queryingRepository).
##### Spring Batch Integration Configuration
Consider a case where someone needs to create a file `inbound-channel-adapter` to listen
for CSV files in the provided directory, hand them off to a transformer
```
public IntegrationFlow integrationFlow(JobLaunchingGateway jobLaunchingGateway) {
    // ...
}
```
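A sketch of what the collapsed Java DSL flow typically contains, following the shape of the reference-guide example. The directory, file pattern, poller settings, and the `fileMessageToJobRequest()` bean are assumptions:

```
@Bean
public IntegrationFlow integrationFlow(JobLaunchingGateway jobLaunchingGateway) {
    return IntegrationFlows
            .from(Files.inboundAdapter(new File("/tmp/myfiles"))
                            .filter(new SimplePatternFileListFilter("*.csv")),
                    c -> c.poller(Pollers.fixedRate(1000).maxMessagesPerPoll(1)))
            .transform(fileMessageToJobRequest()) // the transformer shown earlier
            .handle(jobLaunchingGateway)
            .log(LoggingHandler.Level.WARN, "headers.id + ': ' + payload")
            .get();
}
```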
##### Example ItemReader Configuration
Now that we are polling for files and launching jobs, we need to configure our Spring
Batch `ItemReader` (for example) to use the files found at the location defined by the job
The main points of interest in the preceding example are injecting the value of the `jobParameters` expression and setting the bean
to have *Step scope*. Setting the bean to have Step scope takes advantage of
the late binding support, which allows access to the `jobParameters` variable.
### Available Attributes of the Job-Launching Gateway
The job-launching gateway has the following attributes that you can set to control a job:
* `order`: Specifies the order of invocation when this endpoint is connected as a subscriber
to a `SubscribableChannel`.
### Sub-Elements
When this `Gateway` is receiving messages from a `PollableChannel`, you must either provide
a global default `Poller` or provide a `Poller` sub-element to the `Job Launching Gateway`.
```
public JobLaunchingGateway sampleJobLaunchingGateway() {
    // ...
}
```
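A sketch of the collapsed configuration, following the reference-guide example. The channel names, the poller rate, and the `jobLauncher()` bean are assumptions:

```
@Bean
@ServiceActivator(inputChannel = "queueChannel", poller = @Poller(fixedRate = "1000"))
public JobLaunchingGateway sampleJobLaunchingGateway() {
    JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(jobLauncher());
    jobLaunchingGateway.setOutputChannel(replyChannel()); // assumed reply channel bean
    return jobLaunchingGateway;
}
```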
#### Providing Feedback with Informational Messages
As Spring Batch jobs can run for a long time, providing progress
information is often critical. For example, stakeholders may want
```
public Job importPaymentsJob() {
    // ...
}
```
#### Asynchronous Processors
Asynchronous Processors help you to scale the processing of items. In the asynchronous
processor use case, an `AsyncItemProcessor` serves as a dispatcher, executing the logic of
the `ItemProcessor` for an item on a new thread. Once the item completes, the `Future` is
passed to the `AsyncItemWriter` to be written.

```
public AsyncItemWriter writer(ItemWriter itemWriter) {
    // ...
}
```
Again, the `delegate` property is
actually a reference to your `ItemWriter` bean.
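A minimal sketch of wiring both asynchronous components together, assuming existing `ItemProcessor`, `ItemWriter`, and `TaskExecutor` beans (the `Foo` item type is illustrative):

```
@Bean
public AsyncItemProcessor<Foo, Foo> asyncItemProcessor(ItemProcessor<Foo, Foo> itemProcessor,
        TaskExecutor taskExecutor) {
    AsyncItemProcessor<Foo, Foo> asyncItemProcessor = new AsyncItemProcessor<>();
    asyncItemProcessor.setDelegate(itemProcessor);   // runs the real processor on the executor
    asyncItemProcessor.setTaskExecutor(taskExecutor);
    return asyncItemProcessor;
}

@Bean
public AsyncItemWriter<Foo> asyncItemWriter(ItemWriter<Foo> itemWriter) {
    AsyncItemWriter<Foo> asyncItemWriter = new AsyncItemWriter<>();
    asyncItemWriter.setDelegate(itemWriter);         // unwraps the Futures and delegates the write
    return asyncItemWriter;
}
```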
#### Externalizing Batch Process Execution
The integration approaches discussed so far suggest use cases
where Spring Integration wraps Spring Batch like an outer-shell.
provides dedicated support for:

* Remote Chunking

* Remote Partitioning
##### Remote Chunking
![Remote Chunking](./images/remote-chunking-sbi.png)
```
public class RemoteChunkingJobConfiguration {
    // ...
}
```

You can find a complete example of a remote chunking job [here](https://github.com/spring-projects/spring-batch/tree/master/spring-batch-samples#remote-chunking-sample).
##### Remote Partitioning
![Remote Partitioning](./images/remote-partitioning.png)
# Spring Batch Introduction
## Spring Batch Introduction
Many applications within the enterprise domain require bulk processing to perform
business operations in mission critical environments. These business operations include:
as complex, high volume use cases (such as moving high volumes of data between databases,
transforming it, and so on). High-volume batch jobs can leverage the framework in a
highly scalable manner to process significant volumes of information.
### Background
While open source software projects and associated communities have focused greater
attention on web-based and microservices-based architecture frameworks, there has been a
consistently leveraged by enterprise users when creating batch applications. Companies
and government agencies desiring to deliver standard, proven solutions to their
enterprise IT environments can benefit from Spring Batch.
### Usage Scenarios
A typical batch program generally:
Technical Objectives
* Provide a simple deployment model, with the architecture JARs completely separate from
the application, built using Maven.
### Spring Batch Architecture
Spring Batch is designed with extensibility and a diverse group of end users in mind. The
figure below shows the layered architecture that supports the extensibility and ease of
infrastructure. This infrastructure contains common readers and writers and services
(such as the `RetryTemplate`), which are used both by application developers (readers and
writers, such as `ItemReader` and `ItemWriter`) and the core framework itself (retry,
which is its own library).
### General Batch Principles and Guidelines
The following key principles, guidelines, and general considerations should be considered
when building a batch solution.
If the system depends on flat files, file backup procedures should not only be in place
and documented but be regularly tested as well.
### Batch Processing Strategies
To help design and implement batch systems, basic batch application building blocks and
patterns should be provided to the designers and programmers in the form of sample
# Configuring a Step
## Configuring a `Step`
processing, as shown in the following image:
Figure 1. Step
### Chunk-oriented Processing
Spring Batch uses a 'Chunk-oriented' processing style within its most common
implementation. Chunk oriented processing refers to reading the data one at a time and
creating 'chunks' that are written out within a transaction boundary.

```
itemWriter.write(processedItems);
```
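Conceptually, the processing just described behaves like the following sketch (pseudocode in Java form; `commitInterval`, `itemReader`, `itemProcessor`, and `itemWriter` stand for the configured values and beans):

```
List items = new ArrayList();
for (int i = 0; i < commitInterval; i++) {
    Object item = itemReader.read();
    if (item != null) {
        items.add(item);
    }
}

List processedItems = new ArrayList();
for (Object item : items) {
    Object processedItem = itemProcessor.process(item);
    if (processedItem != null) {
        processedItems.add(processedItem);
    }
}

itemWriter.write(processedItems);
```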
For more details about item processors and their use cases, please refer to the[Item processing](processor.html#itemProcessor) section.
#### Configuring a `Step`
Despite the relatively short list of required dependencies for a `Step`, it is an
extremely complex class that can potentially contain many collaborators.
It should be noted that `repository` defaults to `jobRepository` and `transactionManager` defaults to `transactionManager` (all provided through the infrastructure from `@EnableBatchProcessing`). Also, the `ItemProcessor` is optional, since the item could be
directly passed from the reader to the writer.
#### Inheriting from a Parent `Step`
If a group of `Steps` share similar configurations, then it may be helpful to define a
"parent" `Step` from which the concrete `Steps` may inherit properties. Similar to class
reasons:
* When creating job flows, as described later in this chapter, the `next` attribute
should be referring to the step in the flow, not the standalone step.
##### Abstract `Step`
Sometimes, it may be necessary to define a parent `Step` that is not a complete `Step` configuration. If, for instance, the `reader`, `writer`, and `tasklet` attributes are
left off of a `Step` configuration, then initialization fails. If a parent must be
were not declared to be abstract. The `Step`, "concreteStep2", has 'itemReader' and the
other required properties declared:

```
<step id="concreteStep2" parent="...">
    <!-- ... -->
</step>
```
##### Merging Lists
Some of the configurable elements on `Steps` are lists, such as the `<listeners/>` element.
If both the parent and child `Steps` declare a `<listeners/>` element, then the
In the following example, the `Step` "concreteStep3", is created with two listeners:

```
<step id="concreteStep3" parent="...">
    <!-- ... -->
</step>
```
#### The Commit Interval
As mentioned previously, a step reads in and writes out items, periodically committing
using the supplied `PlatformTransactionManager`. With a `commit-interval` of 1, it
In the preceding example, 10 items are processed within each transaction. At the
beginning of processing, a transaction is begun. Also, each time `read` is called on the `ItemReader`, a counter is incremented. When it reaches 10, the list of aggregated items
is passed to the `ItemWriter`, and the transaction is committed.
#### Configuring a `Step` for Restart
In the "[Configuring and Running a Job](job.html#configureJob)" section , restarting a`Job` was discussed. Restart has numerous impacts on steps, and, consequently, may
require some specific configuration.
##### Setting a Start Limit
There are many scenarios where you may want to control the number of times a `Step` may
be started. For example, a particular `Step` might need to be configured so that it only
The step shown in the preceding example can be run only once. Attempting to run it again
causes a `StartLimitExceededException` to be thrown. Note that the default value for the
start-limit is `Integer.MAX_VALUE`.
##### Restarting a Completed `Step`
In the case of a restartable job, there may be one or more steps that should always be
run, regardless of whether or not they were successful the first time. An example might
```
public Step step1() {
    // ...
}
```
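In Java configuration, this is typically expressed through the step builder, as in the following sketch (the reader and writer beans and the chunk size are assumptions):

```
@Bean
public Step step1() {
    return this.stepBuilderFactory.get("step1")
            .<String, String>chunk(10)
            .reader(itemReader())
            .writer(itemWriter())
            .allowStartIfComplete(true) // always re-run this step, even after success
            .build();
}
```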
##### `Step` Restart Configuration Example
The following XML example shows how to configure a job to have steps that can be
restarted:
Run 3:
the third execution of `playerSummarization`, and its limit is only 2. Either the limit
must be raised or the `Job` must be executed as a new `JobInstance`.
#### Configuring Skip Logic
There are many scenarios where errors encountered while processing should not result in `Step` failure, but should be skipped instead. This is usually a decision that must be
made by someone who understands the data itself and what meaning it has. Financial data,
The order of the `<include/>` and `<exclude/>` elements does not matter.
The order of the `skip` and `noSkip` method calls does not matter.
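For example, a fault-tolerant step that skips `FlatFileParseException` but never skips `FileNotFoundException` could be sketched as follows (the skip limit and bean names are illustrative):

```
@Bean
public Step step1() {
    return this.stepBuilderFactory.get("step1")
            .<String, String>chunk(10)
            .reader(flatFileItemReader())
            .writer(itemWriter())
            .faultTolerant()
            .skipLimit(10)                       // fail the step after 10 skips
            .skip(FlatFileParseException.class)  // skippable
            .noSkip(FileNotFoundException.class) // always fatal
            .build();
}
```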
#### Configuring Retry Logic
In most cases, you want an exception to cause either a skip or a `Step` failure. However,
not all exceptions are deterministic. If a `FlatFileParseException` is encountered while
```
public Step step1() {
    // ...
}
```
The `Step` allows a limit for the number of times an individual item can be retried and a
list of exceptions that are 'retryable'. More details on how retry works can be found in [retry](retry.html#retry).
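A sketch of such a configuration with the step builders, assuming existing reader and writer beans; the retry limit and exception type are illustrative:

```
@Bean
public Step step1() {
    return this.stepBuilderFactory.get("step1")
            .<String, String>chunk(2)
            .reader(itemReader())
            .writer(itemWriter())
            .faultTolerant()
            .retryLimit(3)                                 // at most 3 attempts per item
            .retry(DeadlockLoserDataAccessException.class) // retryable exception
            .build();
}
```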
#### Controlling Rollback
By default, regardless of retry or skip, any exceptions thrown from the `ItemWriter` cause the transaction controlled by the `Step` to roll back. If skip is configured as
described earlier, exceptions thrown from the `ItemReader` do not cause a rollback.
```
public Step step1() {
    // ...
}
```
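For exceptions that should not cause a rollback, the fault-tolerant builder exposes `noRollback`, as in this sketch (the exception type and beans are illustrative):

```
@Bean
public Step step1() {
    return this.stepBuilderFactory.get("step1")
            .<String, String>chunk(2)
            .reader(itemReader())
            .writer(itemWriter())
            .faultTolerant()
            .noRollback(ValidationException.class) // thrown, but does not roll the chunk back
            .build();
}
```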
##### Transactional Readers
The basic contract of the `ItemReader` is that it is forward only. The step buffers
reader input, so that in the case of a rollback, the items do not need to be re-read
```
public Step step1() {
    // ...
}
```
#### Transaction Attributes
Transaction attributes can be used to control the `isolation`, `propagation`, and `timeout` settings. More information on setting transaction attributes can be found in
the Spring Core documentation.

```
public Step step1() {
    // ...
}
```
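A sketch of setting transaction attributes through the builder, assuming reader and writer beans; the timeout value is illustrative:

```
@Bean
public Step step1() {
    DefaultTransactionAttribute attribute = new DefaultTransactionAttribute();
    attribute.setPropagationBehavior(Propagation.REQUIRED.value());
    attribute.setIsolationLevel(Isolation.DEFAULT.value());
    attribute.setTimeout(30); // seconds

    return this.stepBuilderFactory.get("step1")
            .<String, String>chunk(10)
            .reader(itemReader())
            .writer(itemWriter())
            .transactionAttribute(attribute)
            .build();
}
```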
#### Registering `ItemStream` with a `Step`
The step has to take care of `ItemStream` callbacks at the necessary points in its
lifecycle (For more information on the `ItemStream` interface, see [ItemStream](readersAndWriters.html#itemStream)). This is vital if a step fails and might
explicitly registered as a stream because it is a direct property of the `Step`. The step
is now restartable, and the state of the reader and writer is correctly persisted in the
event of a failure.
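A sketch of registering delegate writers as streams when they are wrapped in a composite writer (the bean names are assumptions):

```
@Bean
public Step step1() {
    return this.stepBuilderFactory.get("step1")
            .<String, String>chunk(2)
            .reader(itemReader())
            .writer(compositeItemWriter())
            .stream(fileItemWriter1()) // registered explicitly: not a direct property of the step
            .stream(fileItemWriter2())
            .build();
}
```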
#### Intercepting `Step` Execution
Just as with the `Job`, there are many events during the execution of a `Step` where a
user may need to perform some functionality. For example, in order to write out to a flat
custom implementations of chunk components such as `ItemReader` or `ItemWriter`
as well as registered with the `listener` methods in the builders, so all you need to do
is use the XML namespace or builders to register the listeners with a step.
##### `StepExecutionListener`
`StepExecutionListener` represents the most generic listener for `Step` execution. It
allows for notification before a `Step` is started and after it ends, whether it ended
The annotations corresponding to this interface are:

* `@BeforeStep`

* `@AfterStep`
##### `ChunkListener`
A chunk is defined as the items processed within the scope of a transaction. Committing a
transaction, at each commit interval, commits a 'chunk'. A `ChunkListener` can be used to
A `ChunkListener` can be applied when there is no chunk declaration. The `TaskletStep` is
responsible for calling the `ChunkListener`, so it applies to a non-item-oriented tasklet
as well (it is called before and after the tasklet).
##### `ItemReadListener`
When discussing skip logic previously, it was mentioned that it may be beneficial to log
the skipped records, so that they can be dealt with later. In the case of read errors,
The annotations corresponding to this interface are:

* `@BeforeRead`

* `@AfterRead`

* `@OnReadError`
##### `ItemProcessListener`
Just as with the `ItemReadListener`, the processing of an item can be 'listened' to, as
shown in the following interface definition:
The annotations corresponding to this interface are:

* `@BeforeProcess`

* `@AfterProcess`

* `@OnProcessError`
##### `ItemWriteListener`
The writing of an item can be 'listened' to with the `ItemWriteListener`, as shown in the
following interface definition:
The annotations corresponding to this interface are:

* `@BeforeWrite`

* `@AfterWrite`

* `@OnWriteError`
##### `SkipListener`
`ItemReadListener`, `ItemProcessListener`, and `ItemWriteListener` all provide mechanisms
for being notified of errors, but none informs you that a record has actually been
The annotations corresponding to this interface are:

* `@OnSkipInRead`

* `@OnSkipInWrite`

* `@OnSkipInProcess`
###### SkipListeners and Transactions
One of the most common use cases for a `SkipListener` is to log out a skipped item, so
that another batch process or even human process can be used to evaluate and fix the
may be rolled back, Spring Batch makes two guarantees:

to ensure that any transactional resources called by the listener are not rolled back by a
failure within the `ItemWriter`.
### `TaskletStep`
[Chunk-oriented processing](#chunkOrientedProcessing) is not the only way to process in a `Step`. What if a `Step` must consist of a simple stored procedure call? You could
implement the call as an `ItemReader` and return null after the procedure finishes.
```
public Step step1() {
    // ...
}
```
| |`TaskletStep` automatically registers the<br/>tasklet as a `StepListener` if it implements the `StepListener`interface.|
|---|-----------------------------------------------------------------------------------------------------------------------|
#### `TaskletAdapter`
As with other adapters for the `ItemReader` and `ItemWriter` interfaces, the `Tasklet` interface contains an implementation that allows for adapting itself to any pre-existing
class: `TaskletAdapter`. An example where this may be useful is an existing DAO that is
```
public MethodInvokingTaskletAdapter myTasklet() {
    // ...
}
```
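A sketch of adapting an existing DAO method, assuming a `fooDao()` bean that exposes an `updateFoo` method:

```
@Bean
public MethodInvokingTaskletAdapter myTasklet() {
    MethodInvokingTaskletAdapter adapter = new MethodInvokingTaskletAdapter();

    adapter.setTargetObject(fooDao());    // the pre-existing class
    adapter.setTargetMethod("updateFoo"); // the method to invoke as the tasklet

    return adapter;
}
```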
#### Example `Tasklet` Implementation
Many batch jobs contain steps that must be done before the main processing begins in
order to set up various resources or after processing has completed to clean up those
```
public FileDeletingTasklet fileDeletingTasklet() {
    // ...
}
```
### Controlling Step Flow
With the ability to group steps together within an owning job comes the need to be able
to control how the job "flows" from one step to another. The failure of a `Step` does not
necessarily mean that the `Job` should fail. Furthermore, there may be more than one type
of 'success' that determines which `Step` should be executed next. Depending upon how a
group of `Steps` is configured, certain steps may not even be processed at all.
#### Sequential Flow
The simplest flow scenario is a job where all of the steps execute sequentially, as shown
in the following image:
then the entire `Job` fails and 'step B' does not execute.
| |With the Spring Batch XML namespace, the first step listed in the configuration is*always* the first step run by the `Job`. The order of the other step elements does not<br/>matter, but the first step must always appear first in the xml.|
|---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
#### Conditional Flow
In the example above, there are only two possibilities:
transitions from most specific to least specific. This means that, even if the ordering
were swapped for "stepA" in the example above, an `ExitStatus` of "FAILED" would still go
to "stepC".
##### Batch Status Versus Exit Status
When configuring a `Job` for conditional flow, it is important to understand the
difference between `BatchStatus` and `ExitStatus`. `BatchStatus` is an enumeration that
The above code is a `StepExecutionListener` that first checks to make sure the `Step` was
successful and then checks to see if the skip count on the `StepExecution` is higher than
0. If both conditions are met, a new `ExitStatus` with an exit code of `COMPLETED WITH SKIPS` is returned.
#### Configuring for Stop
After the discussion of [BatchStatus and ExitStatus](#batchStatusVsExitStatus),
one might wonder how the `BatchStatus` and `ExitStatus` are determined for the `Job`.
important to note that the stop transition elements have no effect on either the `BatchStatus` or `ExitStatus` of any `Steps` in the `Job`. These elements affect only the
final statuses of the `Job`. For example, it is possible for every step in a job to have
a status of `FAILED` but for the job to have a status of `COMPLETED`.
##### Ending at a Step
Configuring a step end instructs a `Job` to stop with a `BatchStatus` of `COMPLETED`. A `Job` that has finished with status `COMPLETED` cannot be restarted (the framework throws
a `JobInstanceAlreadyCompleteException`).
```
public Job job() {
    // ...
}
```
##### Failing a Step
Configuring a step to fail at a given point instructs a `Job` to stop with a `BatchStatus` of `FAILED`. Unlike end, the failure of a `Job` does not prevent the `Job` from being restarted.
```
public Job job() {
    // ...
}
```
##### Stopping a Job at a Given Step
Configuring a job to stop at a particular step instructs a `Job` to stop with a `BatchStatus` of `STOPPED`. Stopping a `Job` can provide a temporary break in processing,
so that the operator can take some action before restarting the `Job`.
```
public Job job() {
    // ...
}
```
#### Programmatic Flow Decisions
In some situations, more information than the `ExitStatus` may be required to decide
which step to execute next. In this case, a `JobExecutionDecider` can be used to assist
```
public Job job() {
    // ...
}
```
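A decider implementation is a plain class, along the lines of the following sketch (the `someCondition()` check is an illustrative placeholder for your own logic):

```
public class MyDecider implements JobExecutionDecider {
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        String status;
        if (someCondition()) {
            status = "FAILED";
        }
        else {
            status = "COMPLETED";
        }
        // The returned status drives the next transition in the flow
        return new FlowExecutionStatus(status);
    }
}
```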
#### Split Flows
Every scenario described so far has involved a `Job` that executes its steps one at a
time in a linear fashion. In addition to this typical style, Spring Batch also allows
for a job to be configured with parallel flows.

```
public Job job(Flow flow1, Flow flow2) {
    // ...
}
```
#### Externalizing Flow Definitions and Dependencies Between Jobs
Part of the flow in a job can be externalized as a separate bean definition and then
re-used. There are two ways to do so. The first is to simply declare the flow as a
jobs and steps. Using `JobStep` is also often a good answer to the question: "How do I
create dependencies between jobs?" It is a good way to break up a large system into
smaller modules and control the flow of jobs.
### Late Binding of `Job` and `Step` Attributes
Both the XML and flat file examples shown earlier use the Spring `Resource` abstraction
to obtain a file. This works because `Resource` has a `getFile` method, which returns a `java.io.File`. Both XML and flat file resources can be configured using standard Spring
```
public FlatFileItemReader flatFileItemReader(@Value("#{stepExecutionContext['...']}") String name) {
    // ...
}
```
| |If you are using Spring 3.0 (or above), the expressions in step-scoped beans are in the<br/>Spring Expression Language, a powerful general purpose language with many interesting<br/>features. To provide backward compatibility, if Spring Batch detects the presence of<br/>older versions of Spring, it uses a native expression language that is less powerful and<br/>that has slightly different parsing rules. The main difference is that the map keys in<br/>the example above do not need to be quoted with Spring 2.5, but the quotes are mandatory<br/>in Spring 3.0.|
|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
#### Step Scope
All of the late binding examples shown earlier have a scope of “step” declared on the
bean definition.
The following example includes the bean definition explicitly:

```
<bean class="org.springframework.batch.core.scope.StepScope" />
```
#### Job Scope
`Job` scope, introduced in Spring Batch 3.0, is similar to `Step` scope in configuration
but is a Scope for the `Job` context, so that there is only one instance of such a bean
# Unit Testing
## Unit Testing
to think about how to 'end to end' test a batch job, which is what this chapter covers.
The spring-batch-test project includes classes that facilitate this end-to-end test
approach.
### Creating a Unit Test Class
In order for the unit test to run a batch job, the framework must load the job’s
ApplicationContext. Two annotations are used to trigger this behavior:
Using XML Configuration

```
public class SkipSampleFunctionalTests { ... }
```
### End-To-End Testing of Batch Jobs
'End To End' testing can be defined as testing the complete run of a batch job from
beginning to end. This allows for a test that sets up a test condition, executes the job,
```
public class SkipSampleFunctionalTests {
    // ...
}
```
### Testing Individual Steps
For complex batch jobs, test cases in the end-to-end testing approach may become
unmanageable. In these cases, it may be more useful to have test cases to test individual
results directly. The following example shows how to use the `launchStep` method:

```
JobExecution jobExecution = jobLauncherTestUtils.launchStep("loadFileStep");
```
### Testing Step-Scoped Components
Often, the components that are configured for your steps at runtime use step scope and
late binding to inject context from the step or job execution. These are tricky to test as
```
int count = StepScopeTestUtils.doInStepScope(stepExecution,
    // ...
});
```
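The collapsed listing typically drives a step-scoped reader inside the scope, roughly as in the following sketch (the `reader` variable is an assumed step-scoped `ItemReader`, and the surrounding test method must declare `throws Exception`):

```
int count = StepScopeTestUtils.doInStepScope(stepExecution,
    new Callable<Integer>() {
        public Integer call() throws Exception {
            int count = 0;
            // the step-scoped reader can only be used inside the scope
            while (reader.read() != null) {
                count++;
            }
            return count;
        }
    });
```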
### Validating Output Files
When a batch job writes to the database, it is easy to query the database to verify that
the output is as expected. However, if the batch job writes to a file, it is equally
```
AssertFile.assertFileEquals(new FileSystemResource(EXPECTED_FILE),
new FileSystemResource(OUTPUT_FILE));
```
### Mocking Domain Objects
Another common issue encountered while writing unit and integration tests for Spring Batch
components is how to mock domain objects. A good example is a `StepExecutionListener`, as
# Batch Processing and Transactions
## Appendix A: Batch Processing and Transactions
### Simple Batching with No Retry
Consider the following simple example of a nested batch with no retries. It shows a
common scenario for batch processing: An input source is processed until exhausted, and
be either transactional or idempotent.
If the chunk at `REPEAT` (3) fails because of a database exception at 3.2, then `TX` (2)
must roll back the whole chunk.
### Simple Stateless Retry
It is also useful to use a retry for an operation which is not transactional, such as a
call to a web-service or other remote resource, as shown in the following example:
access (2.1) eventually succeeds, the transaction, `TX` (0), commits. If the remote
access (2.1) eventually fails, then the transaction, `TX` (0), is guaranteed to roll
back.
### Typical Repeat-Retry Pattern
The most typical batch processing pattern is to add a retry to the inner block of the
chunk, as shown in the following example:
consecutive attempts but not necessarily at the same item. This is consistent with the
overall retry strategy. The inner `RETRY` (4) is aware of the history of each item and
can decide whether or not to have another attempt at it.
### Asynchronous Chunk Processing
The inner batches or chunks in the [typical example](#repeatRetry) can be executed
concurrently by configuring the outer batch to use an `AsyncTaskExecutor`. The outer
asynchronous chunk processing:

```
| }
```
### Asynchronous Item Processing
The individual items in chunks in the [typical example](#repeatRetry) can also, in
principle, be processed concurrently. In this case, the transaction boundary has to move
This plan sacrifices the optimization benefit, which the simple plan had, of having all
the transactional resources chunked together. It is only useful if the cost of the
processing (5) is much higher than the cost of transaction management (3).
### Interactions Between Batching and Transaction Propagation
There is a tighter coupling between batch-retry and transaction management than we would
ideally like. In particular, a stateless retry cannot be used to retry database
What about non-default propagation?
Consequently, the `NESTED` pattern is best if the retry block contains any database
access.
### Special Case: Transactions with Orthogonal Resources
Default propagation is always OK for simple cases where there are no nested database
transactions. Consider the following example, where the `SESSION` and `TX` are not
starts. There is no database access outside the `RETRY` (2) block. If `TX` (3) fails and
then eventually succeeds on a retry, `SESSION` (0) can commit (independently of a `TX` block). This is similar to the vanilla "best-efforts-one-phase-commit" scenario. The
worst that can happen is a duplicate message when the `RETRY` (2) succeeds and the `SESSION` (0) cannot commit (for example, because the message system is unavailable).
### Stateless Retry Cannot Recover
The distinction between a stateless and a stateful retry in the typical example above is
important. It is actually ultimately a transactional constraint that forces the
# What’s New in Spring Batch 4.3
## What’s New in Spring Batch 4.3
This release comes with a number of new features, performance improvements,
dependency updates and API deprecations. This section describes the most
important changes. For a complete list of changes, please refer to the [release notes](https://github.com/spring-projects/spring-batch/releases/tag/4.3.0).
### New features
#### New synchronized ItemStreamWriter
Similar to the `SynchronizedItemStreamReader`, this release introduces a `SynchronizedItemStreamWriter`. This feature is useful in multi-threaded steps
where concurrent threads need to be synchronized to not override each other’s writes.
#### New JpaQueryProvider for named queries
This release introduces a new `JpaNamedQueryProvider` next to the `JpaNativeQueryProvider` to ease the configuration of JPA named queries when
using the `JpaPagingItemReader`:
```
JpaPagingItemReader<Foo> reader = new JpaPagingItemReaderBuilder<Foo>()
    // ...
.build();
```
#### New JpaCursorItemReader Implementation
JPA 2.2 added the ability to stream results as a cursor instead of only paging.
This release introduces a new JPA item reader that uses this feature to
stream results in a cursor-based fashion similar to the `JdbcCursorItemReader` and `HibernateCursorItemReader`.
#### New JobParametersIncrementer implementation
Similar to the `RunIdIncrementer`, this release adds a new `JobParametersIncrementer` that is based on a `DataFieldMaxValueIncrementer` from Spring Framework.
#### GraalVM Support
This release adds initial support to run Spring Batch applications on GraalVM.
The support is still experimental and will be improved in future releases.
#### Java records Support
This release adds support to use Java records as items in chunk-oriented steps.
The newly added `RecordFieldSetMapper` supports data mapping from flat files to
```
public record Person(int id, String name) { }
```
The `FlatFileItemReader` uses the new `RecordFieldSetMapper` to map data from
the `persons.csv` file to records of type `Person`.
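A sketch of a reader that maps CSV lines to the `Person` record; the reader name, resource location, and column names are illustrative assumptions:

```
@Bean
public FlatFileItemReader<Person> itemReader() {
    return new FlatFileItemReaderBuilder<Person>()
            .name("personReader")
            .resource(new FileSystemResource("persons.csv"))
            .delimited()
            .names("id", "name")
            // maps each FieldSet onto the record's canonical constructor
            .fieldSetMapper(new RecordFieldSetMapper<>(Person.class))
            .build();
}
```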
### Performance improvements
#### Use bulk writes in RepositoryItemWriter
Up to version 4.2, in order to use `CrudRepository#saveAll` in `RepositoryItemWriter`,
it was required to extend the writer and override `write(List)`.
In this release, the `RepositoryItemWriter` has been updated to use `CrudRepository#saveAll` by default.
#### Use bulk writes in MongoItemWriter
The `MongoItemWriter` used `MongoOperations#save()` in a for loop
to save items to the database. In this release, this writer has been
updated to use `org.springframework.data.mongodb.core.BulkOperations` instead.
#### Job start/restart time improvement
The implementation of `JobRepository#getStepExecutionCount()` used to load
all job executions and step executions in-memory to do the count on the framework
side. In this release, the implementation has been changed to do a single call to
the database with a SQL count query in order to count step executions.
### Dependency updates
This release updates dependent Spring projects to the following versions:
* Micrometer 1.5
### Deprecations
#### API deprecation
The following is a list of APIs that have been deprecated in this release:
Suggested replacements can be found in the Javadoc of each deprecated API.
#### SQLFire support deprecation
SQLFire has been [EOL](https://www.vmware.com/latam/products/pivotal-sqlfire.html) since November 1st, 2014. This release deprecates the support of using SQLFire
as a job repository and schedules it for removal in version 5.0.
# AMQP Support
## AMQP Support
Spring Integration provides channel adapters for receiving and sending messages by using the Advanced Message Queuing Protocol (AMQP).
TIP:
You should familiarize yourself with the [reference documentation of the Spring AMQP project](https://docs.spring.io/spring-amqp/reference/html/).
It provides much more in-depth information about Spring’s integration with AMQP in general and RabbitMQ in particular.
### Inbound Channel Adapter
The following listing shows the possible configuration options for an AMQP Inbound Channel Adapter:
Starting with version 5.5, the `AmqpInboundChannelAdapter` can be configured with an `org.springframework.amqp.rabbit.retry.MessageRecoverer` strategy which is used in the `RecoveryCallback` when the retry operation is called internally.
See `setMessageRecoverer()` JavaDocs for more information.
#### Batched Messages
See [the Spring AMQP Documentation](https://docs.spring.io/spring-amqp/docs/current/reference/html/#template-batching) for more information about batched messages.
The default `BatchingStrategy` is the `SimpleBatchingStrategy`, but this can be overridden.
| |The `org.springframework.amqp.rabbit.retry.MessageBatchRecoverer` must be used with batches when recovery is required for retry operations.|
|---|-------------------------------------------------------------------------------------------------------------------------------------------|
### Polled Inbound Channel Adapter
#### Overview
Version 5.0.1 introduced a polled channel adapter, letting you fetch individual messages on demand — for example, with a `MessageSourcePollingTemplate` or a poller.
See [Deferred Acknowledgment Pollable Message Source](./polling-consumer.html#deferred-acks-message-source) for more information.
XML

```
This adapter currently does not have XML configuration support.
```
#### Batched Messages
See [Batched Messages](#amqp-debatching).
For the polled adapter, there is no listener container; batched messages are always debatched (if the `BatchingStrategy` supports doing so).
### Inbound Gateway
The inbound gateway supports all the attributes on the inbound channel adapter (except that 'channel' is replaced by 'request-channel'), plus some additional attributes.
The following listing shows the available attributes:
See the note in [Inbound Channel Adapter](#amqp-inbound-channel-adapter) about configuring the listener container.
Starting with version 5.5, the `AmqpInboundChannelAdapter` can be configured with an `org.springframework.amqp.rabbit.retry.MessageRecoverer` strategy which is used in the `RecoveryCallback` when the retry operation is called internally.
See `setMessageRecoverer()` JavaDocs for more information.
#### Batched Messages
See [Batched Messages](#amqp-debatching).
### Inbound Endpoint Acknowledge Mode
By default, the inbound endpoints use the `AUTO` acknowledge mode, which means the container automatically acknowledges the message when the downstream integration flow completes (or a message is handed off to another thread by using a `QueueChannel` or `ExecutorChannel`).
Setting the mode to `NONE` configures the consumer such that acknowledgments are not used at all (the broker automatically acknowledges the message as soon as it is sent).
```
public Object handle(@Payload String payload, @Header(AmqpHeaders.CHANNEL) Channel channel) {
    // ...
}
```
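A sketch of a service method that acknowledges manually; the header constants come from `AmqpHeaders`, while the channel name and processing logic are illustrative:

```
@ServiceActivator(inputChannel = "manualAckChannel")
public Object handle(@Payload String payload, @Header(AmqpHeaders.CHANNEL) Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) Long deliveryTag) throws Exception {
    // ... business processing ...
    channel.basicAck(deliveryTag, false); // acknowledge exactly this delivery
    return payload.toUpperCase();
}
```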
### Outbound Endpoints
The following outbound endpoints have many similar configuration options.
Starting with version 5.2, the `confirm-timeout` has been added.
Normally, when publisher confirms are enabled, the broker will quickly return an ack (or nack).
If a channel is closed before the confirm is received, the Spring AMQP framework will synthesize a nack.
"Missing" acks should never occur but, if you set this property, the endpoint will periodically check for them and synthesize a nack if the time elapses without a confirm being received.
### Outbound Channel Adapter
The following example shows the available properties for an AMQP outbound channel adapter:
......@@ -452,7 +452,7 @@ XML
| |return-channel<br/><br/>Using a `return-channel` requires a `RabbitTemplate` with the `mandatory` property set to `true` and a `CachingConnectionFactory` with the `publisherReturns` property set to `true`.<br/>When using multiple outbound endpoints with returns, a separate `RabbitTemplate` is needed for each endpoint.|
|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
### Outbound Gateway
The following listing shows the possible properties for an AMQP Outbound Gateway:
......@@ -552,7 +552,7 @@ XML
Note that the only difference between the outbound adapter and outbound gateway configuration is the setting of the`expectReply` property.
### [](#amqp-async-outbound-gateway)Asynchronous Outbound Gateway
### Asynchronous Outbound Gateway
The gateway discussed in the previous section is synchronous, in that the sending thread is suspended until a
reply is received (or a timeout occurs).
......@@ -670,7 +670,7 @@ See also [Asynchronous Service Activator](./service-activator.html#async-service
| |RabbitTemplate<br/><br/>When you use confirmations and returns, we recommend that the `RabbitTemplate` wired into the `AsyncRabbitTemplate` be dedicated.<br/>Otherwise, unexpected side-effects may be encountered.|
|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
### Alternative Mechanism for Publisher Confirms and Returns
When the connection factory is configured for publisher confirms and returns, the sections above discuss the configuration of message channels to receive the confirms and returns asynchronously.
Starting with version 5.4, there is an additional mechanism which is generally easier to use.
To improve performance, you may wish to send multiple messages and wait for the confirmations later, rather than one-at-a-time.
The returned message is the raw message after conversion; you can subclass `CorrelationData` with whatever additional data you need.
### Inbound Message Conversion
Inbound messages, arriving at the channel adapter or gateway, are converted to the `spring-messaging` `Message<?>` payload using a message converter.
By default, a `SimpleMessageConverter` is used, which handles Java serialization and text.
If the error flow throws an exception, the exception type, in conjunction with the container's acknowledge mode, determines whether or not the message is requeued.
If the container is configured with `AcknowledgeMode.MANUAL`, the payload is a `ManualAckListenerExecutionFailedException` with additional properties `channel` and `deliveryTag`.
This enables the error flow to call `basicAck` or `basicNack` (or `basicReject`) for the message, to control its disposition.
### Outbound Message Conversion
Spring AMQP 1.4 introduced the `ContentTypeDelegatingMessageConverter`, where the actual converter is selected based
on the incoming content type message property.
This applies to both the outbound channel adapter and gateway.
| |Starting with version 5.0, headers that are added to the `MessageProperties` of the outbound message are never overwritten by mapped headers (by default).<br/>Previously, this was only the case if the message converter was a `ContentTypeDelegatingMessageConverter` (in that case, the header was mapped first so that the proper converter could be selected).<br/>For other converters, such as the `SimpleMessageConverter`, mapped headers overwrote any headers added by the converter.<br/>This caused problems when an outbound message had some leftover `contentType` headers (perhaps from an inbound channel adapter) and the correct outbound `contentType` was incorrectly overwritten.<br/>The work-around was to use a header filter to remove the header before sending the message to the outbound endpoint.<br/><br/>There are, however, cases where the previous behavior is desired — for example, when a `String` payload that contains JSON, the `SimpleMessageConverter` is not aware of the content and sets the `contentType` message property to `text/plain` but your application would like to override that to `application/json` by setting the `contentType` header of the message sent to the outbound endpoint.<br/>The `ObjectToJsonTransformer` does exactly that (by default).<br/><br/>There is now a property called `headersMappedLast` on the outbound channel adapter and gateway (as well as on AMQP-backed channels).<br/>Setting this to `true` restores the behavior of overwriting the property added by the converter.<br/><br/>Starting with version 5.1.9, a similar `replyHeadersMappedLast` is provided for the `AmqpInboundGateway` when we produce a reply and would like to override headers populated by the converter.<br/>See its JavaDocs for more information.|
|---|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
### Outbound User ID
Spring AMQP version 1.6 introduced a mechanism to allow the specification of a default user ID for outbound messages.
It has always been possible to set the `AmqpHeaders.USER_ID` header, which now takes precedence over the default.
To configure a default user ID for outbound messages, configure it on a `RabbitTemplate`.
Similarly, to set the user ID property on replies, inject an appropriately configured template into the inbound gateway.
See the [Spring AMQP documentation](https://docs.spring.io/spring-amqp/reference/html/_reference.html#template-user-id) for more information.
### Delayed Message Exchange
Spring AMQP supports the [RabbitMQ Delayed Message Exchange Plugin](https://docs.spring.io/spring-amqp/reference/html/#delayed-message-exchange).
For inbound messages, the `x-delay` header is mapped to the `AmqpHeaders.RECEIVED_DELAY` header.
Setting the `AmqpHeaders.DELAY` header causes the corresponding `x-delay` header to be set on outbound messages.
You can also specify the `delay` and `delayExpression` properties on outbound endpoints (`delay-expression` when using XML configuration).
These properties take precedence over the `AmqpHeaders.DELAY` header.
### AMQP-backed Message Channels
There are two message channel implementations available.
One is point-to-point, and the other is publish-subscribe.
By default, Spring AMQP `MessageProperties` uses `PERSISTENT` delivery mode.
| |Starting with version 5.0, the pollable channel now blocks the poller thread for the specified `receiveTimeout` (the default is 1 second).<br/>Previously, unlike other `PollableChannel` implementations, the thread returned immediately to the scheduler if no message was available, regardless of the receive timeout.<br/>Blocking is a little more expensive than using a `basicGet()` to retrieve a message (with no timeout), because a consumer has to be created to receive each message.<br/>To restore the previous behavior, set the poller’s `receiveTimeout` to 0.|
|---|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
#### Configuring with Java Configuration
The following example shows how to configure the channels with Java configuration:
```
public AmqpChannelFactoryBean pubSub(ConnectionFactory connectionFactory) {
    // ...
}
```
#### Configuring with the Java DSL
The following example shows how to configure the channels with the Java DSL:
```
public IntegrationFlow pubSubInFlow(ConnectionFactory connectionFactory) {
    // ...
}
```
### AMQP Message Headers
#### Overview
The Spring Integration AMQP Adapters automatically map all AMQP properties and headers.
(This is a change from 4.3 - previously, only standard headers were mapped).
For this purpose, a `!json_*` pattern should be configured for the header mapper of the inbound channel adapter (or gateway).
| |Starting with version 5.1, the `DefaultAmqpHeaderMapper` will fall back to mapping `MessageHeaders.ID` and `MessageHeaders.TIMESTAMP` to `MessageProperties.messageId` and `MessageProperties.timestamp` respectively, if the corresponding `amqp_messageId` or `amqp_timestamp` headers are not present on outbound messages.<br/>Inbound properties will be mapped to the `amqp_*` headers as before.<br/>It is useful to populate the `messageId` property when message consumers are using stateful retry.|
|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
#### The `contentType` Header
Unlike other headers, the `AmqpHeaders.CONTENT_TYPE` is not prefixed with `amqp_`; this allows transparent passing of the contentType header across different technologies.
For example, consider an inbound HTTP message that is subsequently sent to a RabbitMQ queue.
Prior to version 5.1, this header was also mapped as an entry in the `MessageProperties.headers` map; this was incorrect and, furthermore, the value could be wrong, since the underlying Spring AMQP `MessageConverter` might have changed the content type.
Such a change would be reflected in the first-class `content_type` property but not in the RabbitMQ headers map.
Inbound mapping ignored the headers map value. `contentType` is no longer mapped to an entry in the headers map.
### Strict Message Ordering
This section describes message ordering for inbound and outbound messages.
#### Inbound
If you require strict ordering of inbound messages, you must configure the inbound listener container’s `prefetchCount` property to `1`.
This is because, if a message fails and is redelivered, it arrives after existing prefetched messages.
Since Spring AMQP version 2.0, the `prefetchCount` defaults to `250` for improved performance.
Strict ordering requirements come at the cost of decreased performance.
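A minimal sketch of such a container definition follows (the queue name is an illustrative assumption):

```
@Bean
public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("ordered.queue");
    container.setPrefetchCount(1); // trades throughput for strict ordering
    return container;
}
```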
#### Outbound
Consider the following integration flow:
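A sketch of such a flow, assuming a hypothetical `Gateway` messaging-gateway interface and an illustrative routing key:

```
@Bean
public IntegrationFlow flow(RabbitTemplate template) {
    return IntegrationFlows.from(Gateway.class)
            .split(s -> s.delimiters(","))
            .<String, String>transform(String::toUpperCase)
            .handle(Amqp.outboundAdapter(template).routingKey("rk"))
            .get();
}
```

Starting with version 5.1, the `BoundRabbitChannelAdvice` can be applied so that all downstream sends use the same dedicated RabbitMQ channel, which guarantees their order.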
If the optional timeout is provided, when the flow completes, the `BoundRabbitChannelAdvice` calls the `waitForConfirmsOrDie` method, which throws an exception if the confirms are not received within the specified time.
| |There must be no thread handoffs in the downstream flow (`QueueChannel`, `ExecutorChannel`, and others).|
|---|--------------------------------------------------------------------------------------------------------|
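A sketch of applying the advice to the splitter in the flow above (the ten-second timeout is an illustrative choice):

```
@Bean
public IntegrationFlow flow(RabbitTemplate template) {
    return IntegrationFlows.from(Gateway.class)
            .split(s -> s.delimiters(",")
                    .advice(new BoundRabbitChannelAdvice(template, Duration.ofSeconds(10))))
            .<String, String>transform(String::toUpperCase)
            .handle(Amqp.outboundAdapter(template).routingKey("rk"))
            .get();
}
```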
### AMQP Samples
To experiment with the AMQP adapters, check out the samples available in the Spring Integration Samples Git repository at [https://github.com/spring-projects/spring-integration-samples](https://github.com/spring-projects/spring-integration-samples).
# Configuration
## Configuration
Spring Integration offers a number of configuration options.
Which option you choose depends upon your particular needs and at what level you prefer to work.
As much as possible, the two provide consistent naming.
The XML elements defined by the XSD schema match the names of the annotations, and the attributes of those XML elements match the names of annotation properties.
You can also use the API directly, but we expect most developers to choose one of the higher-level options or a combination of the namespace-based and annotation-driven configuration.
### Namespace Support
You can configure Spring Integration components with XML elements that map directly to the terminology and concepts of enterprise integration.
In many cases, the element names match those of the [*Enterprise Integration Patterns*](https://www.enterpriseintegrationpatterns.com/) book.
For example, the following root element shows several of these namespace declarations:
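(The exact set of declarations depends on which modules are in use; the AMQP namespace below is an illustrative choice.)

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-amqp="http://www.springframework.org/schema/integration/amqp"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           https://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/integration
           https://www.springframework.org/schema/integration/spring-integration.xsd
           http://www.springframework.org/schema/integration/amqp
           https://www.springframework.org/schema/integration/amqp/spring-integration-amqp.xsd">

    <!-- component declarations go here -->

</beans>
```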
This reference manual provides specific examples of the various elements in their corresponding chapters.
Here, the main thing to recognize is the consistency of the naming for each namespace URI and schema location.
### Configuring the Task Scheduler
In Spring Integration, the `ApplicationContext` plays the central role of a message bus, and you need to consider only a couple of configuration options.
First, you may want to control the central `TaskScheduler` instance.
However, when no task executor is provided for an endpoint’s poller, it is invoked by one of the main scheduler’s threads.
See also [Error Handling](./error-handling.html#error-handling) for more information.
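A minimal sketch of replacing the default scheduler bean follows (the pool size shown matches the framework default of 10):

```
@Bean(name = IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME)
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(10); // increase this if many pollers compete for threads
    return scheduler;
}
```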
### Global Properties
Certain global framework properties can be overridden by providing a properties file on the classpath.
```
spring.integration.readOnly.headers=
spring.integration.messagingTemplate.throwExceptionOnLateReply=true
```
### Annotation Support
In addition to the XML namespace support for configuring message endpoints, you can also use annotations.
First, Spring Integration provides the class-level `@MessageEndpoint` as a stereotype annotation, meaning that it is itself annotated with Spring’s `@Component` annotation and is therefore automatically recognized as a bean definition by Spring’s component scanning.
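As a minimal sketch (the class, method, and channel names are illustrative assumptions):

```
@MessageEndpoint
public class HelloService {

    @ServiceActivator(inputChannel = "helloChannel")
    public String greet(String name) {
        return "Hello " + name; // the return value becomes the reply payload
    }
}
```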
For these purposes, you should use the `beanName` mentioned earlier in the preceding paragraph.
| |Channels automatically created after parsing the mentioned annotations (when no specific channel bean is configured), and the corresponding consumer endpoints, are declared as beans near the end of the context initialization.<br/>These beans **can** be autowired in other services, but they have to be marked with the `@Lazy` annotation because the definitions, typically, won’t yet be available during normal autowiring processing.<br/><br/>```<br/>@Autowired<br/>@Lazy<br/>@Qualifier("someChannel")<br/>MessageChannel someChannel;<br/>...<br/><br/>@Bean<br/>Thing1 dependsOnSPCA(@Qualifier("someInboundAdapter") @Lazy SourcePollingChannelAdapter someInboundAdapter) {<br/> ...<br/>}<br/>```|
|---|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
#### Using the `@Poller` Annotation
Before Spring Integration 4.0, messaging annotations required that the `inputChannel` be a reference to a `SubscribableChannel`.
For `PollableChannel` instances, an `<int:bridge/>` element was needed to configure an `<int:poller/>` and make the composite endpoint be a `PollingConsumer`.
See [Endpoint Namespace Support](./endpoint.html#endpoint-namespace) for more information.
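Since version 4.0, the annotation can carry the poller configuration directly. The following is a sketch (the channel name and timings are illustrative assumptions):

```
@ServiceActivator(inputChannel = "pollableChannel",
        poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public void handle(String payload) {
    // invoked for each message polled from 'pollableChannel'
}
```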
The `poller()` attribute on the messaging annotations is mutually exclusive with the `reactive()` attribute.
See the next section for more information.
#### Using `@Reactive` Annotation
The `ReactiveStreamsConsumer` has been around since version 5.0, but it was applied only when an input channel for the endpoint is a `FluxMessageChannel` (or any `org.reactivestreams.Publisher` implementation).
Starting with version 5.3, its instance is also created by the framework when the target message handler is a `ReactiveMessageHandler` independently of the input channel type.
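A sketch of such an endpoint follows (the channel name and the customizer function are illustrative assumptions); the `@Reactive` value can reference a function bean that is applied to the `Flux` of messages before it reaches the handler:

```
@Bean
public Function<Flux<?>, Flux<?>> reactiveCustomizer() {
    return flux -> flux.delayElements(Duration.ofSeconds(1));
}

@ServiceActivator(inputChannel = "inputChannel", reactive = @Reactive("reactiveCustomizer"))
public void handleReactive(String payload) {
    // each payload arrives via the customized reactive stream
}
```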
The `reactive()` attribute on the messaging annotations is mutually exclusive with the `poller()` attribute.
See [Using the `@Poller` Annotation](#configuration-using-poller-annotation) and [Reactive Streams Support](./reactive-streams.html#reactive-streams) for more information.
#### Using the `@InboundChannelAdapter` Annotation
Version 4.0 introduced the `@InboundChannelAdapter` method-level annotation.
It produces a `SourcePollingChannelAdapter` integration component based on a `MethodInvokingMessageSource` for the annotated method.
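A sketch follows (the channel name and polling rate are illustrative; `counter` is an assumed `AtomicInteger` field on the enclosing endpoint class):

```
private final AtomicInteger counter = new AtomicInteger();

@InboundChannelAdapter(value = "counterChannel", poller = @Poller(fixedRate = "5000"))
public Integer count() {
    // each non-null poll result becomes the payload of a new message
    return this.counter.incrementAndGet();
}
```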
#### Using the `@MessagingGateway` Annotation
See [`@MessagingGateway` Annotation](./gateway.html#messaging-gateway-annotation).
#### Using the `@IntegrationComponentScan` Annotation
The standard Spring Framework `@ComponentScan` annotation does not scan interfaces for stereotype `@Component` annotations.
To overcome this limitation and allow the configuration of `@MessagingGateway` (see [`@MessagingGateway` Annotation](./gateway.html#messaging-gateway-annotation)), we introduced the `@IntegrationComponentScan` mechanism.
It supports most of the standard `@ComponentScan` attributes,
such as `basePackages` and `basePackageClasses`.
In this case, all discovered interfaces annotated with `@MessagingGateway` are parsed and registered as `GatewayProxyFactoryBean` instances.
All other class-based components are parsed by the standard `@ComponentScan`.
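A sketch follows (the package name is an illustrative assumption):

```
@Configuration
@EnableIntegration
@IntegrationComponentScan(basePackages = "com.example.gateways")
public class ScanConfiguration {
    // interfaces annotated with @MessagingGateway under 'com.example.gateways'
    // are registered as GatewayProxyFactoryBean instances
}
```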
### Messaging Meta-Annotations
Starting with version 4.0, all messaging annotations can be configured as meta-annotations and all user-defined messaging annotations can define the same attributes to override their default values.
In addition, meta-annotations can be configured hierarchically, as the following example shows:
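A sketch of such a hierarchy (the channel names are illustrative assumptions):

```
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@ServiceActivator(inputChannel = "annInput", outputChannel = "annOutput")
public @interface MyServiceActivator {
}

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@MyServiceActivator
public @interface MyServiceActivator1 {
}

// the user-level annotation is applied as if it were @ServiceActivator itself
@MyServiceActivator1
public Object service(Object payload) {
    return payload; // the reply goes to 'annOutput'
}
```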
Configuring meta-annotations hierarchically lets users set defaults for various attributes and enables isolation of framework Java dependencies to user annotations, avoiding their use in user classes.
If the framework finds a method with a user annotation that has a framework meta-annotation, it is treated as if the method were annotated directly with the framework annotation.
#### Annotations on `@Bean` Methods
Starting with version 4.0, you can configure messaging annotations on `@Bean` method definitions in `@Configuration` classes, to produce message endpoints based on the beans, not the methods.
It is useful when `@Bean` definitions are “out-of-the-box” `MessageHandler` instances (`AggregatingMessageHandler`, `DefaultMessageSplitter`, and others), `Transformer` instances (`JsonToObjectTransformer`, `ClaimCheckOutTransformer`, and others), and `MessageSource` instances (`FileReadingMessageSource`, `RedisStoreMessageSource`, and others).
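A sketch follows (the channel names are illustrative assumptions):

```
@Bean
@Transformer(inputChannel = "jsonInput", outputChannel = "objectOutput")
public JsonToObjectTransformer jsonToObject() {
    // the endpoint is built around this bean, not around the method
    return new JsonToObjectTransformer();
}
```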
The meta-annotation rules work on `@Bean` methods as well (the `@MyServiceActivator` annotation described earlier can also be applied to a `@Bean` definition).
| |With Java configuration, you can use any `@Conditional` (for example, `@Profile`) definition on the `@Bean` method level to skip the bean registration for some conditional reason.<br/>The following example shows how to do so:<br/><br/>```<br/>@Bean<br/>@ServiceActivator(inputChannel = "skippedChannel")<br/>@Profile("thing")<br/>public MessageHandler skipped() {<br/> return System.out::println;<br/>}<br/>```<br/><br/>Together with the existing Spring container logic, the messaging endpoint bean (based on the `@ServiceActivator` annotation), is also not registered.|
|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
#### Creating a Bridge with Annotations
Starting with version 4.0, Java configuration provides the `@BridgeFrom` and `@BridgeTo` `@Bean` method annotations to mark `MessageChannel` beans in `@Configuration` classes.
These exist mainly for completeness, providing a convenient mechanism to declare a `BridgeHandler` and its message endpoint configuration:
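A sketch follows (the channel names and the poller interval are illustrative assumptions):

```
@Bean
public PollableChannel bridgeFromInput() {
    return new QueueChannel();
}

@Bean
@BridgeFrom(value = "bridgeFromInput", poller = @Poller(fixedDelay = "1000"))
public MessageChannel bridgeFromOutput() {
    return new DirectChannel();
}

@Bean
@BridgeTo("bridgeToOutput")
public MessageChannel bridgeToInput() {
    return new DirectChannel();
}

@Bean
public QueueChannel bridgeToOutput() {
    return new QueueChannel();
}
```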
You can use these annotations as meta-annotations as well.
#### Advising Annotated Endpoints
See [Advising Endpoints Using Annotations](./handler-advice.html#advising-with-annotations).
### Message Mapping Rules and Conventions
Spring Integration implements a flexible facility to map messages to methods and their arguments without providing extra configuration, by relying on some default rules and defining certain conventions.
The examples in the following sections articulate the rules.
#### Sample Scenarios
The following example shows a single un-annotated parameter (object or primitive) that is not a `Map` or a `Properties` object with a non-void return type:
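A sketch of such a signature (the method name is illustrative):

```
public String doSomething(Object o);
```

The single argument receives the message payload (converted, if necessary), and the non-null return value becomes the payload of the reply message.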
A related scenario is a method with the same parameter but a `void` return type (for example, `public void doSomething(Object o);`).
It is handled the same way as the previous example, but it produces no output (no reply message is generated).
#### Annotation-based Mapping
Annotation-based mapping is the safest and least ambiguous approach to map messages to methods.
The following example shows how to explicitly map a method to a header:
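(A sketch; the method and header names are illustrative assumptions.)

```
public String doSomething(@Payload String content, @Header("something") String something);
```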
However, with annotation-based mapping, the ambiguity is easily avoided.
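Consider the following sketch of a method with several annotated parameters (the names are illustrative assumptions):

```
public String doSomething(@Headers Map<String, Object> headers,
        @Header("something") String something,
        @Header("someotherthing") String someOtherThing);
```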
In this example, the first argument is mapped to all the message headers, while the second and third arguments map to the values of the message headers named 'something' and 'someotherthing'.
The payload is not mapped to any argument.
#### Complex Scenarios
The following example uses multiple parameters:
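For instance, consider this sketch, where the two parameters are equal in weight and neither is annotated:

```
public String doSomething(String s, int i);
```

Without annotations, the framework has no way to determine which argument should receive the payload, so such signatures are ambiguous.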
# Integration Endpoints
## Endpoint Quick Reference Table
As discussed in the earlier sections, Spring Integration provides a number of endpoints used to interface with external systems, file systems, and others.