Commit cc9802e2 authored by 茶陵後

#1 Add the original English text

Parent 2c5f25dc
# Spring Chinese Documentation Community
Hands-on translations of `Spring`-related documentation, covering `Quick Start`, `Installation Guide`, `Development Tool Configuration`, `Code Examples`, and more.
## Notice
The articles on this site are sourced from [Spring.io](https://spring.io/), and the original copyright belongs to [Spring.io](https://spring.io/). This site translates and organizes those articles. They are provided for personal study, research, or enjoyment only; without prior written permission from this site, no reproduction, commercial use, or related activity is permitted.
Trademark notice: Spring is a trademark of Pivotal Software, Inc. in the United States and other countries.
## Documentation List
- [Spring](/why-spring.html)
- [Spring Boot](/spring-boot/getting-help.html)
- [Spring Framework](/spring-framework/overview.html)
- [Spring Data](/spring-data/spring-data.html)
- [Spring Cloud](/spring-cloud/documentation-overview.html)
- [Spring Cloud Data Flow](/spring-cloud-data-flow/spring-cloud-dataflow.html)
- [Spring Security](/spring-security/overview.html)
- [Spring for GraphQL](/spring-for-graphql/spring-graphql.html)
- [Spring Session](/spring-session/_index.html)
- [Spring Integration](/spring-integration/preface.html)
- [Spring HATEOAS](/spring-hateoas/spring-hateoas.html)
- [Spring REST Docs](/spring-rest-docs/spring-restdocs.html)
- [Spring Batch](/spring-batch/spring-batch-intro.html)
- [Spring AMQP](/spring-amqp/spring-amqp.html)
- [Spring CredHub](/spring-credhub/spring-credhub.html)
- [Spring Flo](/spring-flo/spring-flo.html)
- [Spring for Apache Kafka](/spring-for-apache-kafka/spring-kafka.html)
- [Spring LDAP](/spring-ldap/spring-ldap.html)
- [Spring Shell](/spring-shell/spring-shell.html)
- [Spring Statemachine](/spring-statemachine/spring-statemachine.html)
- [Spring Vault](/spring-vault/spring-vault.html)
- [Spring Web Flow](/spring-web-flow/preface.html)
- [Spring Web Services](/spring-web-services/spring-web-service.html)
## Contribution Workflow
Anyone **proficient with `Java Spring`** can take part in building the `Spring Chinese Documentation Community`. Pick a topic that interests you: write an article directly, translate content from the official [Spring site](https://spring.io/), or proofread articles translated by others. The contribution workflow is as follows.
![](./readme/readme-1.png)
### 1. Read the docs and help improve them
While browsing a document on the [`Spring Chinese Documentation Community`](https://spring.gitcode.net), if you spot something inaccurate, you can click `Edit this page on GitCode` at the bottom left of that page at any time.
![](./readme/readme-2.png)
### 2. Proofread/Write on GitCode
After entering GitCode, you are automatically taken to the file you want to modify, where you can edit its content.
#### 2-1. Repository members
If you are a member of the repository, clicking the `Edit` button puts you directly into an editable state where you can modify the file content.
![](./readme/readme-3.png)
![](./readme/readme-4.png)
#### 2-2. Non-members
If you are not a member of the repository, clicking `Edit` will prompt that you do not have permission to edit. You can click the `Fork` button to clone the project under your own GitCode account.
![](./readme/readme-5.png)
### 3. Submit a PR when editing is complete
When you have finished editing, submit a PR (Pull Request) to [this repository](https://gitcode.net/dev-cloud/spring-docs).
### 4. Review
Maintainers of the [main repository](https://gitcode.net/dev-cloud/spring-docs) will review the PR; submissions that meet the requirements are merged into the [main repository](https://gitcode.net/dev-cloud/spring-docs).
### 5. View the update
After a successful merge, wait a moment and refresh the page to see the update.
## Appendix A: List of ItemReaders and ItemWriters
### Item Readers
| Item Reader | Description |
|-------------|-------------|
| `AbstractItemCountingItemStreamItemReader` | Abstract base class that provides basic restart capabilities by counting the number of items returned from an `ItemReader`. |
| `AggregateItemReader` | An `ItemReader` that delivers a list as its item, storing up objects from the injected `ItemReader` until they are ready to be packed out as a collection. This class must be used as a wrapper for a custom `ItemReader` that can identify the record boundaries. The custom reader should mark the beginning and end of records by returning an `AggregateItem` which responds `true` to its query methods `isHeader()` and `isFooter()`. Note that this reader is not part of the library of readers provided by Spring Batch but is given as a sample in `spring-batch-samples`. |
| `AmqpItemReader` | Given a Spring `AmqpTemplate`, it provides synchronous receive methods. The `receiveAndConvert()` method lets you receive POJO objects. |
| `KafkaItemReader` | An `ItemReader` that reads messages from an Apache Kafka topic. It can be configured to read messages from multiple partitions of the same topic. This reader stores message offsets in the execution context to support restart capabilities. |
| `FlatFileItemReader` | Reads from a flat file. Includes `ItemStream` and `Skippable` functionality. See [`FlatFileItemReader`](readersAndWriters.html#flatFileItemReader). |
| `HibernateCursorItemReader` | Reads from a cursor based on an HQL query. See [Cursor-based ItemReaders](readersAndWriters.html#cursorBasedItemReaders). |
| `HibernatePagingItemReader` | Reads from a paginated HQL query. |
| `ItemReaderAdapter` | Adapts any class to the `ItemReader` interface. |
| `JdbcCursorItemReader` | Reads from a database cursor via JDBC. See [Cursor-based ItemReaders](readersAndWriters.html#cursorBasedItemReaders). |
| `JdbcPagingItemReader` | Given an SQL statement, pages through the rows, such that large datasets can be read without running out of memory. |
| `JmsItemReader` | Given a Spring `JmsOperations` object and a JMS destination or destination name to which to send errors, provides items received through the injected `JmsOperations#receive()` method. |
| `JpaPagingItemReader` | Given a JPQL statement, pages through the rows, such that large datasets can be read without running out of memory. |
| `ListItemReader` | Provides the items from a list, one at a time. |
| `MongoItemReader` | Given a `MongoOperations` object and a JSON-based MongoDB query, provides items received from the `MongoOperations#find()` method. |
| `Neo4jItemReader` | Given a `Neo4jOperations` object and the components of a Cypher query, items are returned as the result of the `Neo4jOperations#query` method. |
| `RepositoryItemReader` | Given a Spring Data `PagingAndSortingRepository` object, a `Sort`, and the name of the method to execute, returns items provided by the Spring Data repository implementation. |
| `StoredProcedureItemReader` | Reads from a database cursor resulting from the execution of a database stored procedure. See [`StoredProcedureItemReader`](readersAndWriters.html#StoredProcedureItemReader). |
| `StaxEventItemReader` | Reads via StAX. See [`StaxEventItemReader`](readersAndWriters.html#StaxEventItemReader). |
| `JsonItemReader` | Reads items from a JSON document. See [`JsonItemReader`](readersAndWriters.html#JsonItemReader). |
### Item Writers
| Item Writer | Description |
|-------------|-------------|
| `AbstractItemStreamItemWriter` | Abstract base class that combines the `ItemStream` and `ItemWriter` interfaces. |
| `AmqpItemWriter` | Given a Spring `AmqpTemplate`, it provides a synchronous `send` method. The `convertAndSend(Object)` method lets you send POJO objects. |
| `CompositeItemWriter` | Passes an item to the `write` method of each in an injected `List` of `ItemWriter` objects. |
| `FlatFileItemWriter` | Writes to a flat file. Includes `ItemStream` and `Skippable` functionality. See [`FlatFileItemWriter`](readersAndWriters.html#flatFileItemWriter). |
| `GemfireItemWriter` | Using a `GemfireOperations` object, items are either written to or removed from the Gemfire instance, based on the configuration of the delete flag. |
| `HibernateItemWriter` | This item writer is Hibernate-session aware and handles some transaction-related work that a non-"Hibernate-aware" item writer would not need to know about and then delegates to another item writer to do the actual writing. |
| `ItemWriterAdapter` | Adapts any class to the `ItemWriter` interface. |
| `JdbcBatchItemWriter` | Uses batching features from a `PreparedStatement`, if available, and can take rudimentary steps to locate a failure during a `flush`. |
| `JmsItemWriter` | Using a `JmsOperations` object, items are written to the default queue through the `JmsOperations#convertAndSend()` method. |
| `JpaItemWriter` | This item writer is JPA-EntityManager-aware and handles some transaction-related work that a non-"JPA-aware" `ItemWriter` would not need to know about and then delegates to another writer to do the actual writing. |
| `KafkaItemWriter` | Using a `KafkaTemplate` object, items are written to the default topic through the `KafkaTemplate#sendDefault(Object, Object)` method, using a `Converter` to map the key from the item. A delete flag can also be configured to send delete events to the topic. |
| `MimeMessageItemWriter` | Using Spring’s `JavaMailSender`, items of type `MimeMessage` are sent as mail messages. |
| `MongoItemWriter` | Given a `MongoOperations` object, items are written through the `MongoOperations.save(Object)` method. The actual write is delayed until the last possible moment before the transaction commits. |
| `Neo4jItemWriter` | Given a `Neo4jOperations` object, items are persisted through the `save(Object)` method or deleted through the `delete(Object)` method, per the `ItemWriter`’s configuration. |
| `PropertyExtractingDelegatingItemWriter` | Extends `AbstractMethodInvokingDelegator`, creating arguments on the fly. Arguments are created by retrieving the values from the fields in the item to be processed (through a `SpringBeanWrapper`), based on an injected array of field names. |
| `RepositoryItemWriter` | Given a Spring Data `CrudRepository` implementation, items are saved through the method specified in the configuration. |
| `StaxEventItemWriter` | Uses a `Marshaller` implementation to convert each item to XML and then writes it to an XML file using StAX. |
| `JsonFileItemWriter` | Uses a `JsonObjectMarshaller` implementation to convert each item to JSON and then writes it to a JSON file. |
# Glossary
## Appendix A: Glossary
### Spring Batch Glossary
**Batch**
An accumulation of business transactions over time.

**Batch Application Style**
Term used to designate batch as an application style in its own right, similar to
online, Web, or SOA. It has standard elements of input, validation, transformation of
information to business model, business processing, and output. In addition, it
requires monitoring at a macro level.

**Batch Processing**
The handling of a batch of many business transactions that have accumulated over a
period of time (such as an hour, a day, a week, a month, or a year). It is the
application of a process or set of processes to many data entities or objects in a
repetitive and predictable fashion with either no manual element or a separate manual
element for error processing.

**Batch Window**
The time frame within which a batch job must complete. This can be constrained by other
systems coming online, other dependent jobs needing to execute, or other factors
specific to the batch environment.

**Step**
The main batch task or unit of work. It initializes the business logic and controls the
transaction environment, based on the commit interval setting and other factors.

**Tasklet**
A component created by an application developer to process the business logic for a
step.

**Batch Job Type**
Job types describe the application of jobs for particular types of processing. Common areas
are interface processing (typically flat files), forms processing (either for online
PDF generation or print formats), and report processing.

**Driving Query**
A driving query identifies the set of work for a job to do. The job then breaks that
work into individual units of work. For instance, a driving query might identify
all financial transactions that have a status of "pending transmission" and send them
to a partner system. The driving query returns a set of record IDs to process. Each
record ID then becomes a unit of work. A driving query may involve a join (if the
criteria for selection fall across two or more tables) or it may work with a single
table.

**Item**
An item represents the smallest amount of complete data for processing. In the simplest
terms, this might be a line in a file, a row in a database table, or a particular
element in an XML file.

**Logical Unit of Work (LUW)**
A batch job iterates through a driving query (or other input source, such as a file) to
perform the set of work that the job must accomplish. Each iteration of work performed
is a unit of work.

**Commit Interval**
A set of LUWs processed within a single transaction.

**Partitioning**
Splitting a job into multiple threads, where each thread is responsible for a subset of
the overall data to be processed. The threads of execution may be within the same JVM,
or they may span JVMs in a clustered environment that supports workload balancing.
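The idea can be sketched in plain Java (a simplified, stdlib-only illustration of the concept, not Spring Batch's partitioning API; the `partition` helper and the thread count are made up for this example):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionSketch {

    // Split the full data set into one contiguous slice per worker.
    static <T> List<List<T>> partition(List<T> items, int workers) {
        List<List<T>> slices = new ArrayList<>();
        int size = (items.size() + workers - 1) / workers; // ceiling division
        for (int start = 0; start < items.size(); start += size) {
            slices.add(items.subList(start, Math.min(start + size, items.size())));
        }
        return slices;
    }

    public static void main(String[] args) throws Exception {
        List<Integer> data = new ArrayList<>();
        for (int i = 1; i <= 10; i++) data.add(i);

        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<Future<Integer>> results = new ArrayList<>();
        // Each thread is responsible for a subset of the overall data.
        for (List<Integer> slice : partition(data, 3)) {
            Callable<Integer> task = () -> slice.stream().mapToInt(Integer::intValue).sum();
            results.add(pool.submit(task));
        }
        int total = 0;
        for (Future<Integer> f : results) total += f.get();
        pool.shutdown();
        System.out.println(total); // partial sums of the slices add up to the whole: 55
    }
}
```

In real Spring Batch partitioning, the framework splits the step's execution context rather than an in-memory list, and the slices may run in remote JVMs.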
**Staging Table**
A table that holds temporary data while it is being processed.

**Restartable**
A job that can be executed again and assumes the same identity as when run initially.
In other words, it has the same job instance ID.

**Rerunnable**
A job that is restartable and manages its own state in terms of the previous run’s
record processing. An example of a rerunnable step is one based on a driving query. If
the driving query can be formed so that it limits the processed rows when the job is
restarted, then it is rerunnable. This is managed by the application logic. Often, a
condition is added to the `where` clause to limit the rows returned by the driving
query with logic resembling `and processedFlag != true`.

**Repeat**
One of the most basic units of batch processing: repeatedly calling a portion of code
until it is finished and while there is no error. Typically, a batch
process is repeatable as long as there is input.

**Retry**
Simplifies the execution of operations with retry semantics, most frequently associated
with handling transactional output exceptions. Retry is slightly different from repeat:
rather than continually calling a block of code, retry is stateful and continually
calls the same block of code with the same input, until it either succeeds or some type
of retry limit has been exceeded. It is generally useful only when a subsequent
invocation of the operation might succeed because something in the environment has
improved.
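A rough, stdlib-only sketch of these semantics (not Spring Batch's actual retry API; all names here are hypothetical):

```java
import java.util.concurrent.Callable;

public class RetrySketch {

    // Calls the same operation with the same input until it succeeds
    // or the attempt limit is exceeded, then rethrows the last failure.
    static <T> T retry(Callable<T> operation, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                last = e; // stateful: remember the failure and try again
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice, then succeeds -- e.g. a transient environment problem clearing up.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("transient failure");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```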
**Recover**
Recover operations handle an exception in such a way that a repeat process is able to
continue.

**Skip**
Skip is a recovery strategy often used on file input sources as the strategy for
ignoring bad input records that failed validation.
# Monitoring and metrics
## Monitoring and metrics
Since version 4.2, Spring Batch provides support for batch monitoring and metrics
based on [Micrometer](https://micrometer.io/). This section describes
which metrics are provided out-of-the-box and how to contribute custom metrics.
### Built-in metrics
Metrics collection does not require any specific configuration. All metrics provided
by the framework are registered in [Micrometer’s global registry](https://micrometer.io/docs/concepts#_global_registry) under the `spring.batch` prefix. The following table explains all the metrics in detail:
| *Metric Name* | *Type* | *Description* | *Tags* |
|---------------------------|-----------------|---------------------------|---------------------------------|
| `spring.batch.job` | `TIMER` | Duration of job execution | `name`, `status` |
| `spring.batch.job.active` |`LONG_TASK_TIMER`| Currently active jobs | `name` |
| `spring.batch.step` | `TIMER` |Duration of step execution | `name`, `job.name`, `status` |
| `spring.batch.item.read` | `TIMER` | Duration of item reading |`job.name`, `step.name`, `status`|
|`spring.batch.item.process`| `TIMER` |Duration of item processing|`job.name`, `step.name`, `status`|
|`spring.batch.chunk.write` | `TIMER` | Duration of chunk writing |`job.name`, `step.name`, `status`|
> The `status` tag can be either `SUCCESS` or `FAILURE`.
### Custom metrics
If you want to use your own metrics in your custom components, we recommend using
Micrometer APIs directly. The following is an example of how to time a `Tasklet`:
```
import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.Timer;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class MyTimedTasklet implements Tasklet {

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
        Timer.Sample sample = Timer.start(Metrics.globalRegistry);
        String status = "success";
        try {
            // do some work
        } catch (Exception e) {
            // handle exception
            status = "failure";
        } finally {
            sample.stop(Timer.builder("my.tasklet.timer")
                    .description("Duration of MyTimedTasklet")
                    .tag("status", status)
                    .register(Metrics.globalRegistry));
        }
        return RepeatStatus.FINISHED;
    }
}
```
### Disabling metrics
Metrics collection is a concern similar to logging. Disabling logs is typically
done by configuring the logging library, and this is no different for metrics.
There is no feature in Spring Batch to disable Micrometer’s metrics; this should
be done on Micrometer’s side. Since Spring Batch stores metrics in Micrometer’s
global registry under the `spring.batch` prefix, you can configure
Micrometer to ignore or deny batch metrics with the following snippet:
```
Metrics.globalRegistry.config().meterFilter(MeterFilter.denyNameStartsWith("spring.batch"));
```
Please refer to Micrometer’s [reference documentation](http://micrometer.io/docs/concepts#_meter_filters) for more details.
# Item processing
## Item processing
The [ItemReader and ItemWriter interfaces](readersAndWriters.html#readersAndWriters) are both very useful for their specific
tasks, but what if you want to insert business logic before writing? One option for both
reading and writing is to use the composite pattern: Create an `ItemWriter` that contains
another `ItemWriter` or an `ItemReader` that contains another `ItemReader`. The following
code shows an example:
```
public class CompositeItemWriter<T> implements ItemWriter<T> {

    ItemWriter<T> itemWriter;

    public CompositeItemWriter(ItemWriter<T> itemWriter) {
        this.itemWriter = itemWriter;
    }

    public void write(List<? extends T> items) throws Exception {
        // Add business logic here
        itemWriter.write(items);
    }

    public void setDelegate(ItemWriter<T> itemWriter) {
        this.itemWriter = itemWriter;
    }
}
```
The preceding class contains another `ItemWriter` to which it delegates after having
provided some business logic. This pattern could easily be used for an `ItemReader` as
well, perhaps to obtain more reference data based upon the input that was provided by the
main `ItemReader`. It is also useful if you need to control the call to `write` yourself.
However, if you only want to 'transform' the item passed in for writing before it is
actually written, you need not `write` yourself. You can just modify the item. For this
scenario, Spring Batch provides the `ItemProcessor` interface, as shown in the following
interface definition:
```
public interface ItemProcessor<I, O> {

    O process(I item) throws Exception;
}
```
An `ItemProcessor` is simple. Given one object, transform it and return another. The
provided object may or may not be of the same type. The point is that business logic may
be applied within the process, and it is completely up to the developer to create that
logic. An `ItemProcessor` can be wired directly into a step. For example, assume an`ItemReader` provides a class of type `Foo` and that it needs to be converted to type `Bar`before being written out. The following example shows an `ItemProcessor` that performs
the conversion:
```
public class Foo {}

public class Bar {
    public Bar(Foo foo) {}
}

public class FooProcessor implements ItemProcessor<Foo, Bar> {
    public Bar process(Foo foo) throws Exception {
        // Perform simple transformation, convert a Foo to a Bar
        return new Bar(foo);
    }
}

public class BarWriter implements ItemWriter<Bar> {
    public void write(List<? extends Bar> bars) throws Exception {
        // write bars
    }
}
```
In the preceding example, there is a class `Foo`, a class `Bar`, and a class `FooProcessor` that adheres to the `ItemProcessor` interface. The transformation is
simple, but any type of transformation could be done here. The `BarWriter` writes `Bar` objects, throwing an exception if any other type is provided. Similarly, the `FooProcessor` throws an exception if anything but a `Foo` is provided. The `FooProcessor` can then be injected into a `Step`, as shown in the following example:
XML Configuration
```
<job id="ioSampleJob">
    <step name="step1">
        <tasklet>
            <chunk reader="fooReader" processor="fooProcessor" writer="barWriter"
                   commit-interval="2"/>
        </tasklet>
    </step>
</job>
```
Java Configuration
```
@Bean
public Job ioSampleJob() {
    return this.jobBuilderFactory.get("ioSampleJob")
                .start(step1())
                .build();
}

@Bean
public Step step1() {
    return this.stepBuilderFactory.get("step1")
                .<Foo, Bar>chunk(2)
                .reader(fooReader())
                .processor(fooProcessor())
                .writer(barWriter())
                .build();
}
```
A difference between `ItemProcessor` and `ItemReader` or `ItemWriter` is that an `ItemProcessor` is optional for a `Step`.
### Chaining ItemProcessors
Performing a single transformation is useful in many scenarios, but what if you want to
'chain' together multiple `ItemProcessor` implementations? You can do so by using
the composite pattern mentioned previously. To update the previous, single-transformation
example, `Foo` is transformed to `Bar`, which is transformed to `Foobar` and written out, as shown in the following example:
```
public class Foo {}

public class Bar {
    public Bar(Foo foo) {}
}

public class Foobar {
    public Foobar(Bar bar) {}
}

public class FooProcessor implements ItemProcessor<Foo, Bar> {
    public Bar process(Foo foo) throws Exception {
        // Perform simple transformation, convert a Foo to a Bar
        return new Bar(foo);
    }
}

public class BarProcessor implements ItemProcessor<Bar, Foobar> {
    public Foobar process(Bar bar) throws Exception {
        return new Foobar(bar);
    }
}

public class FoobarWriter implements ItemWriter<Foobar> {
    public void write(List<? extends Foobar> items) throws Exception {
        // write items
    }
}
```
A `FooProcessor` and a `BarProcessor` can be 'chained' together to give the resultant `Foobar`, as shown in the following example:
```
CompositeItemProcessor<Foo,Foobar> compositeProcessor =
new CompositeItemProcessor<Foo,Foobar>();
List itemProcessors = new ArrayList();
itemProcessors.add(new FooProcessor());
itemProcessors.add(new BarProcessor());
compositeProcessor.setDelegates(itemProcessors);
```
Just as with the previous example, the composite processor can be configured into the `Step`:
XML Configuration
```
<job id="ioSampleJob">
    <step name="step1">
        <tasklet>
            <chunk reader="fooReader" processor="compositeItemProcessor" writer="foobarWriter"
                   commit-interval="2"/>
        </tasklet>
    </step>
</job>

<bean id="compositeItemProcessor"
      class="org.springframework.batch.item.support.CompositeItemProcessor">
    <property name="delegates">
        <list>
            <bean class="..FooProcessor" />
            <bean class="..BarProcessor" />
        </list>
    </property>
</bean>
```
Java Configuration
```
@Bean
public Job ioSampleJob() {
    return this.jobBuilderFactory.get("ioSampleJob")
                .start(step1())
                .build();
}

@Bean
public Step step1() {
    return this.stepBuilderFactory.get("step1")
                .<Foo, Foobar>chunk(2)
                .reader(fooReader())
                .processor(compositeProcessor())
                .writer(foobarWriter())
                .build();
}

@Bean
public CompositeItemProcessor compositeProcessor() {
    List<ItemProcessor> delegates = new ArrayList<>(2);
    delegates.add(new FooProcessor());
    delegates.add(new BarProcessor());

    CompositeItemProcessor processor = new CompositeItemProcessor();
    processor.setDelegates(delegates);

    return processor;
}
```
### Filtering Records
One typical use for an item processor is to filter out records before they are passed to
the `ItemWriter`. Filtering is an action distinct from skipping. Skipping indicates that
a record is invalid, while filtering simply indicates that a record should not be
written.
For example, consider a batch job that reads a file containing three different types of
records: records to insert, records to update, and records to delete. If record deletion
is not supported by the system, then we would not want to send any "delete" records to
the `ItemWriter`. But, since these records are not actually bad records, we would want to
filter them out rather than skip them. As a result, the `ItemWriter` would receive only
"insert" and "update" records.
To filter a record, you can return `null` from the `ItemProcessor`. The framework detects
that the result is `null` and avoids adding that item to the list of records delivered to
the `ItemWriter`. As usual, an exception thrown from the `ItemProcessor` results in a
skip.
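The null-filtering contract can be illustrated outside Spring Batch with a self-contained sketch (the `Record` type and the processor function here are hypothetical stand-ins for an item type and an `ItemProcessor`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class FilteringSketch {

    record Record(String action, String payload) {}

    // Mimics an ItemProcessor that filters: returning null means
    // "do not pass this item on to the writer" (it is not an error).
    static final Function<Record, Record> processor = r ->
            "delete".equals(r.action()) ? null : r;

    public static void main(String[] args) {
        List<Record> input = List.of(
                new Record("insert", "a"),
                new Record("delete", "b"),
                new Record("update", "c"));

        List<Record> toWrite = new ArrayList<>();
        for (Record r : input) {
            Record processed = processor.apply(r);
            if (processed != null) { // null results are filtered, not skipped
                toWrite.add(processed);
            }
        }
        System.out.println(toWrite.size()); // only "insert" and "update" remain: 2
    }
}
```

In a real chunk-oriented step, the loop over items and the null check are performed by the framework, not by your code.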
### Validating Input
In the [ItemReaders and ItemWriters](readersAndWriters.html#readersAndWriters) chapter, multiple approaches to parsing input have been
discussed. Each major implementation throws an exception if it is not 'well-formed'. The `FixedLengthTokenizer` throws an exception if a range of data is missing. Similarly,
attempting to access an index in a `RowMapper` or `FieldSetMapper` that does not exist or
is in a different format than the one expected causes an exception to be thrown. All of
these types of exceptions are thrown before `read` returns. However, they do not address
the issue of whether or not the returned item is valid. For example, if one of the fields
is an age, it obviously cannot be negative. It may parse correctly, because it exists and
is a number, but it does not cause an exception. Since there are already a plethora of
validation frameworks, Spring Batch does not attempt to provide yet another. Rather, it
provides a simple interface, called `Validator`, that can be implemented by any number of
frameworks, as shown in the following interface definition:
```
public interface Validator<T> {

    void validate(T value) throws ValidationException;
}
```
The contract is that the `validate` method throws an exception if the object is invalid
and returns normally if it is valid. Spring Batch provides an out-of-the-box `ValidatingItemProcessor`, as shown in the following bean definition:
XML Configuration
```
<bean class="org.springframework.batch.item.validator.ValidatingItemProcessor">
    <property name="validator" ref="validator" />
</bean>

<bean id="validator" class="org.springframework.batch.item.validator.SpringValidator">
    <property name="validator">
        <bean class="org.springframework.batch.sample.domain.trade.internal.validator.TradeValidator"/>
    </property>
</bean>
```
Java Configuration
```
@Bean
public ValidatingItemProcessor itemProcessor() {
    ValidatingItemProcessor processor = new ValidatingItemProcessor();
    processor.setValidator(validator());

    return processor;
}

@Bean
public SpringValidator validator() {
    SpringValidator validator = new SpringValidator();
    validator.setValidator(new TradeValidator());

    return validator;
}
```
You can also use the `BeanValidatingItemProcessor` to validate items annotated with
the Bean Validation API (JSR-303) annotations. For example, given the following type `Person`:
```
class Person {

    @NotEmpty
    private String name;

    public Person(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```
you can validate items by declaring a `BeanValidatingItemProcessor` bean in your
application context and registering it as a processor in your chunk-oriented step:
```
@Bean
public BeanValidatingItemProcessor<Person> beanValidatingItemProcessor() throws Exception {
    BeanValidatingItemProcessor<Person> beanValidatingItemProcessor = new BeanValidatingItemProcessor<>();
    beanValidatingItemProcessor.setFilter(true);

    return beanValidatingItemProcessor;
}
```
### Fault Tolerance
When a chunk is rolled back, items that have been cached during reading may be
reprocessed. If a step is configured to be fault tolerant (typically by using skip or
retry processing), any `ItemProcessor` used should be implemented in a way that is
idempotent. Typically, that means making no changes to the input item and updating only
the instance that is the result.
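A minimal sketch of such an idempotent processor, assuming hypothetical `Foo` and `Bar` types like those used earlier in this chapter:

```java
public class IdempotentProcessorSketch {

    static class Foo { final int value; Foo(int value) { this.value = value; } }
    static class Bar { final int doubled; Bar(int doubled) { this.doubled = doubled; } }

    // Idempotent: the input Foo is never mutated; a fresh Bar is built each
    // time, so reprocessing the same item after a chunk rollback yields the
    // same result with no side effects.
    static Bar process(Foo foo) {
        return new Bar(foo.value * 2);
    }

    public static void main(String[] args) {
        Foo foo = new Foo(21);
        Bar first = process(foo);
        Bar again = process(foo); // simulate reprocessing after a rollback
        System.out.println(first.doubled + " " + again.doubled); // 42 42
    }
}
```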
# Repeat
## Repeat
### RepeatTemplate
Batch processing is about repetitive actions, either as a simple optimization or as part
of a job. To strategize and generalize the repetition and to provide what amounts to an
iterator framework, Spring Batch has the `RepeatOperations` interface. The `RepeatOperations` interface has the following definition:
```
public interface RepeatOperations {

    RepeatStatus iterate(RepeatCallback callback) throws RepeatException;
}
```
The callback is an interface, shown in the following definition, that lets you insert
some business logic to be repeated:
```
public interface RepeatCallback {

    RepeatStatus doInIteration(RepeatContext context) throws Exception;
}
```
The callback is executed repeatedly until the implementation determines that the
iteration should end. The return value in these interfaces is an enumeration that can
either be `RepeatStatus.CONTINUABLE` or `RepeatStatus.FINISHED`. A `RepeatStatus` enumeration conveys information to the caller of the repeat operations about whether
there is any more work to do. Generally speaking, implementations of `RepeatOperations` should inspect the `RepeatStatus` and use it as part of the decision to end the
iteration. Any callback that wishes to signal to the caller that there is no more work to
do can return `RepeatStatus.FINISHED`.
The simplest general purpose implementation of `RepeatOperations` is `RepeatTemplate`, as
shown in the following example:
```
RepeatTemplate template = new RepeatTemplate();

template.setCompletionPolicy(new SimpleCompletionPolicy(2));

template.iterate(new RepeatCallback() {

    public RepeatStatus doInIteration(RepeatContext context) {
        // Do stuff in batch...
        return RepeatStatus.CONTINUABLE;
    }
});
```
In the preceding example, we return `RepeatStatus.CONTINUABLE`, to show that there is
more work to do. The callback can also return `RepeatStatus.FINISHED`, to signal to the
caller that there is no more work to do. Some iterations can be terminated by
considerations intrinsic to the work being done in the callback. Others are effectively
infinite loops as far as the callback is concerned and the completion decision is
delegated to an external policy, as in the case shown in the preceding example.
#### RepeatContext
The method parameter for the `RepeatCallback` is a `RepeatContext`. Many callbacks ignore
the context. However, if necessary, it can be used as an attribute bag to store transient
data for the duration of the iteration. After the `iterate` method returns, the context
no longer exists.
If there is a nested iteration in progress, a `RepeatContext` has a parent context. The
parent context is occasionally useful for storing data that need to be shared between
calls to `iterate`. This is the case, for instance, if you want to count the number of
occurrences of an event in the iteration and remember it across subsequent calls.
#### RepeatStatus
`RepeatStatus` is an enumeration used by Spring Batch to indicate whether processing has
finished. It has two possible `RepeatStatus` values, described in the following table:
| *Value* | *Description* |
|-----------|--------------------------------------|
|CONTINUABLE| There is more work to do. |
| FINISHED |No more repetitions should take place.|
`RepeatStatus` values can also be combined with a logical AND operation by using the `and()` method in `RepeatStatus`. The effect of this is to do a logical AND on the
continuable flag. In other words, if either status is `FINISHED`, then the result is `FINISHED`.
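The `and()` semantics can be mirrored in a small standalone enum (a hypothetical `Status` type, not the real `RepeatStatus`): the result is `CONTINUABLE` only when both operands are `CONTINUABLE`.

```java
// Hypothetical mirror of the RepeatStatus.and() semantics: a logical AND
// on the continuable flag, so any FINISHED operand forces FINISHED.
enum Status {
    CONTINUABLE(true),
    FINISHED(false);

    private final boolean continuable;

    Status(boolean continuable) {
        this.continuable = continuable;
    }

    Status and(Status other) {
        return (this.continuable && other.continuable) ? CONTINUABLE : FINISHED;
    }
}
```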
### Completion Policies
Inside a `RepeatTemplate`, the termination of the loop in the `iterate` method is
determined by a `CompletionPolicy`, which is also a factory for the `RepeatContext`. The `RepeatTemplate` has the responsibility to use the current policy to create a `RepeatContext` and pass that in to the `RepeatCallback` at every stage in the iteration.
After a callback completes its `doInIteration`, the `RepeatTemplate` has to make a call
to the `CompletionPolicy` to ask it to update its state (which will be stored in the `RepeatContext`). Then it asks the policy if the iteration is complete.
Spring Batch provides some simple general purpose implementations of `CompletionPolicy`. `SimpleCompletionPolicy` allows execution up to a fixed number of times (with `RepeatStatus.FINISHED` forcing early completion at any time).
Users might need to implement their own completion policies for more complicated
decisions. For example, a batch processing window that prevents batch jobs from executing
once the online systems are in use would require a custom policy.
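The batch-window case can be sketched as a small time check. The following `BatchWindowPolicy` class is a hypothetical illustration of the completion decision only (a real custom policy would implement the `CompletionPolicy` interface, which is not shown here): the iteration is considered complete once the clock leaves the allowed window.

```java
import java.time.LocalTime;

// Hypothetical sketch of a time-window completion check: stop repeating
// once the current time falls outside the configured batch window.
class BatchWindowPolicy {
    private final LocalTime windowStart;
    private final LocalTime windowEnd;

    BatchWindowPolicy(LocalTime start, LocalTime end) {
        this.windowStart = start;
        this.windowEnd = end;
    }

    boolean isComplete(LocalTime now) {
        // complete (no more repeats) when outside the window
        return now.isBefore(windowStart) || now.isAfter(windowEnd);
    }
}
```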
### Exception Handling
If there is an exception thrown inside a `RepeatCallback`, the `RepeatTemplate` consults
an `ExceptionHandler`, which can decide whether or not to re-throw the exception.
The following listing shows the `ExceptionHandler` interface definition:
```
public interface ExceptionHandler {

    void handleException(RepeatContext context, Throwable throwable)
        throws Throwable;

}
```
A common use case is to count the number of exceptions of a given type and fail when a
limit is reached. For this purpose, Spring Batch provides the `SimpleLimitExceptionHandler` and a slightly more flexible `RethrowOnThresholdExceptionHandler`. The `SimpleLimitExceptionHandler` has a limit
property and an exception type that should be compared with the current exception. All
subclasses of the provided type are also counted. Exceptions of the given type are
ignored until the limit is reached, and then they are rethrown. Exceptions of other types
are always rethrown.
An important optional property of the `SimpleLimitExceptionHandler` is the boolean flag
called `useParent`. It is `false` by default, so the limit is only accounted for in the
current `RepeatContext`. When set to `true`, the limit is kept across sibling contexts in
a nested iteration (such as a set of chunks inside a step).
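The counting behavior described above can be sketched in plain Java. The `LimitExceptionCounter` below is a hypothetical illustration, not the real `SimpleLimitExceptionHandler`: exceptions of the configured type (including subclasses, via `isInstance`) are ignored until the limit is exceeded, and everything else is rethrown immediately.

```java
// Hypothetical sketch of an exception-limit handler: swallow exceptions
// of the given type until the limit is reached, then rethrow; always
// rethrow exceptions of other types.
class LimitExceptionCounter {
    private final Class<? extends Throwable> type;
    private final int limit;
    private int count = 0;

    LimitExceptionCounter(Class<? extends Throwable> type, int limit) {
        this.type = type;
        this.limit = limit;
    }

    void handle(Throwable t) throws Throwable {
        if (!type.isInstance(t)) {
            throw t;            // other types are always rethrown
        }
        count++;
        if (count > limit) {
            throw t;            // limit exceeded: rethrow
        }
        // below the limit: ignore
    }
}
```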
### Listeners
Often, it is useful to be able to receive additional callbacks for cross-cutting concerns
across a number of different iterations. For this purpose, Spring Batch provides the `RepeatListener` interface. The `RepeatTemplate` lets users register `RepeatListener` implementations, and they are given callbacks with the `RepeatContext` and `RepeatStatus` where available during the iteration.
The `RepeatListener` interface has the following definition:
```
public interface RepeatListener {

    void before(RepeatContext context);
    void after(RepeatContext context, RepeatStatus result);
    void open(RepeatContext context);
    void onError(RepeatContext context, Throwable e);
    void close(RepeatContext context);

}
```
The `open` and `close` callbacks come before and after the entire iteration. `before`, `after`, and `onError` apply to the individual `RepeatCallback` calls.
Note that, when there is more than one listener, they are in a list, so there is an
order. In this case, `open` and `before` are called in the same order while `after`, `onError`, and `close` are called in reverse order.
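The ordering rule can be demonstrated with a small sketch (hypothetical `ListenerOrder` helper; listeners are represented by plain names rather than `RepeatListener` instances): "opening" callbacks fire in registration order, "closing" callbacks in reverse.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of listener call ordering: open/before callbacks
// run in registration order; after/onError/close run in reverse order.
class ListenerOrder {

    static List<String> fire(List<String> listeners) {
        List<String> calls = new ArrayList<>();
        for (String l : listeners) {            // forward: open/before
            calls.add("open:" + l);
        }
        List<String> reversed = new ArrayList<>(listeners);
        Collections.reverse(reversed);
        for (String l : reversed) {             // reverse: after/onError/close
            calls.add("close:" + l);
        }
        return calls;
    }
}
```

With two listeners `A` and `B`, the effect is a nesting: `A` opens first and closes last.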
### Parallel Processing
Implementations of `RepeatOperations` are not restricted to executing the callback
sequentially. It is quite important that some implementations are able to execute their
callbacks in parallel. To this end, Spring Batch provides the `TaskExecutorRepeatTemplate`, which uses the Spring `TaskExecutor` strategy to run the `RepeatCallback`. The default is to use a `SynchronousTaskExecutor`, which has the effect
of executing the whole iteration in the same thread (the same as a normal `RepeatTemplate`).
### Declarative Iteration
Sometimes there is some business processing that you know you want to repeat every time
it happens. The classic example of this is the optimization of a message pipeline. It is
more efficient to process a batch of messages, if they are arriving frequently, than to
bear the cost of a separate transaction for every message. Spring Batch provides an AOP
interceptor that wraps a method call in a `RepeatOperations` object for just this
purpose. The `RepeatOperationsInterceptor` executes the intercepted method and repeats
according to the `CompletionPolicy` in the provided `RepeatTemplate`.
The following example shows declarative iteration using the Spring AOP namespace to
repeat a service call to a method called `processMessage` (for more detail on how to
configure AOP interceptors, see the Spring User Guide):
```
<aop:config>
    <aop:pointcut id="transactional"
        expression="execution(* com..*Service.processMessage(..))" />
    <aop:advisor pointcut-ref="transactional"
        advice-ref="retryAdvice" order="-1"/>
</aop:config>

<bean id="retryAdvice" class="org.spr...RepeatOperationsInterceptor"/>
```
The following example demonstrates using Java configuration to
repeat a service call to a method called `processMessage` (for more detail on how to
configure AOP interceptors, see the Spring User Guide):
```
@Bean
public MyService myService() {
    ProxyFactory factory = new ProxyFactory(RepeatOperations.class.getClassLoader());
    factory.setInterfaces(MyService.class);
    factory.setTarget(new MyService());

    MyService service = (MyService) factory.getProxy();
    JdkRegexpMethodPointcut pointcut = new JdkRegexpMethodPointcut();
    pointcut.setPatterns(".*processMessage.*");

    RepeatOperationsInterceptor interceptor = new RepeatOperationsInterceptor();
    ((Advised) service).addAdvisor(new DefaultPointcutAdvisor(pointcut, interceptor));

    return service;
}
```
The preceding example uses a default `RepeatTemplate` inside the interceptor. To change
the policies, listeners, and other details, you can inject an instance of `RepeatTemplate` into the interceptor.
If the intercepted method returns `void`, then the interceptor always returns `RepeatStatus.CONTINUABLE` (so there is a danger of an infinite loop if the `CompletionPolicy` does not have a finite end point). Otherwise, it returns `RepeatStatus.CONTINUABLE` until the return value from the intercepted method is `null`,
at which point it returns `RepeatStatus.FINISHED`. Consequently, the business logic
inside the target method can signal that there is no more work to do by returning `null` or by throwing an exception that is re-thrown by the `ExceptionHandler` in the provided `RepeatTemplate`.
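The return-value rule can be summarized in a tiny sketch (hypothetical `ReturnValueMapping` helper; the real decision happens inside `RepeatOperationsInterceptor`):

```java
// Hypothetical sketch of how the interceptor maps the intercepted
// method's return value to a repeat status, per the rules above.
class ReturnValueMapping {

    static String statusFor(boolean isVoid, Object returnValue) {
        if (isVoid) {
            // void methods always continue: without a finite
            // CompletionPolicy this risks an infinite loop
            return "CONTINUABLE";
        }
        return (returnValue == null) ? "FINISHED" : "CONTINUABLE";
    }
}
```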
# Getting Help
If you have trouble with Spring Boot, we would like to help.
* Try the [How-to documents](howto.html#howto).
They provide solutions to the most common questions.
* Learn the Spring basics.
Spring Boot builds on many other Spring projects.
Check the [spring.io](https://spring.io) web-site for a wealth of reference documentation.
If you are starting out with Spring, try one of the [guides](https://spring.io/guides).
* Ask a question.
We monitor [stackoverflow.com](https://stackoverflow.com) for questions tagged with [`spring-boot`](https://stackoverflow.com/tags/spring-boot).
* Report bugs with Spring Boot at [github.com/spring-projects/spring-boot/issues](https://github.com/spring-projects/spring-boot/issues).
| |All of Spring Boot is open source, including the documentation. If you find problems with the docs or if you want to improve them, please [get involved](https://github.com/spring-projects/spring-boot/tree/v2.6.4).|
|---|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
# Legal
Copyright © 2012-2022
Copies of this document may be made for your own use and for distribution to
others, provided that you do not charge any fee for such copies and further
provided that each copy contains this Copyright Notice, whether distributed in
print or electronically.
# Upgrading Spring Boot
Instructions for how to upgrade from earlier versions of Spring Boot are provided on the project [wiki](https://github.com/spring-projects/spring-boot/wiki).
Follow the links in the [release notes](https://github.com/spring-projects/spring-boot/wiki#release-notes) section to find the version that you want to upgrade to.
Upgrading instructions are always the first item in the release notes.
If you are more than one release behind, please make sure that you also review the release notes of the versions that you jumped.
## 1. Upgrading from 1.x
If you are upgrading from the `1.x` release of Spring Boot, check the [“migration guide” on the project wiki](https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.0-Migration-Guide) that provides detailed upgrade instructions.
Check also the [“release notes”](https://github.com/spring-projects/spring-boot/wiki) for a list of “new and noteworthy” features for each release.
## 2. Upgrading to a new feature release
When upgrading to a new feature release, some properties may have been renamed or removed.
Spring Boot provides a way to analyze your application’s environment and print diagnostics at startup, but also temporarily migrate properties at runtime for you.
To enable that feature, add the following dependency to your project:
```
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-properties-migrator</artifactId>
<scope>runtime</scope>
</dependency>
```
| |Properties that are added late to the environment, such as when using `@PropertySource`, will not be taken into account.|
|---|------------------------------------------------------------------------------------------------------------------------|
| |Once you finish the migration, please make sure to remove this module from your project’s dependencies.|
|---|-------------------------------------------------------------------------------------------------------|
## 3. Upgrading the Spring Boot CLI
To upgrade an existing CLI installation, use the appropriate package manager command (for example, `brew upgrade`).
If you manually installed the CLI, follow the [standard instructions](getting-started.html#getting-started.installing.cli.manual-installation), remembering to update your `PATH` environment variable to remove any older references.
## 4. What to Read Next
Once you’ve decided to upgrade your application, you can find detailed information regarding specific features in the rest of the document.
Spring Boot’s documentation is specific to that version, so any information that you find in here will contain the most up-to-date changes that are in that version.
# Spring Cloud
# Spring Cloud Documentation
## 1. About the Documentation
The Spring Cloud reference guide is available as:
* [Multi-page HTML](https://docs.spring.io/spring-cloud/docs/2021.0.1/reference/html)
* [Single-page HTML](https://docs.spring.io/spring-cloud/docs/2021.0.1/reference/htmlsingle)
* [PDF](https://docs.spring.io/spring-cloud/docs/2021.0.1/reference/pdf/spring-cloud.pdf)
Copies of this document may be made for your own use and for distribution to others,
provided that you do not charge any fee for such copies and further provided that each
copy contains this Copyright Notice, whether distributed in print or electronically.
## 2. Getting Help
If you have trouble with Spring Cloud, we would like to help.
* Learn the Spring Cloud basics. If you are
starting out with Spring Cloud, try one of the [guides](https://spring.io/guides).
* Ask a question. We monitor [stackoverflow.com](https://stackoverflow.com) for questions
tagged with [`spring-cloud`](https://stackoverflow.com/tags/spring-cloud).
* Chat with us at [Spring Cloud Gitter](https://gitter.im/spring-cloud/spring-cloud)
| |All of Spring Cloud is open source, including the documentation. If you find<br/>problems with the docs or if you want to improve them, please get involved.|
|---|------------------------------------------------------------------------------------------------------------------------------------------------------------|
# Spring Data