Every Flink application depends on a set of Flink libraries. At the bare minimum, the application depends on the Flink APIs. Many applications depend in addition on certain connector libraries (like Kafka, Cassandra, etc.). When running Flink applications (either in a distributed deployment, or in the IDE for testing), the Flink runtime library must be available as well.
* **Flink Core Dependencies**: Flink itself consists of a set of classes and dependencies that are needed to run the system, for example coordination, networking, checkpoints, failover, APIs, operations (such as windowing), resource management, etc. The set of all these classes and dependencies forms the core of Flink’s runtime and must be present when a Flink application is started.
These core classes and dependencies are packaged in the `flink-dist` jar. They are part of Flink’s `lib` folder and part of the basic Flink container images. Think of these dependencies as similar to Java’s core library (`rt.jar`, `charsets.jar`, etc.), which contains the classes like `String` and `List`.
* The **User Application Dependencies** are all connectors, formats, or libraries that a specific user application needs.
The Flink Core Dependencies do not contain any connectors or libraries (CEP, SQL, ML, etc.) in order to avoid having an excessive number of dependencies and classes in the classpath by default. In fact, we try to keep the core dependencies as slim as possible to keep the default classpath small and avoid dependency clashes.
The user application is typically packaged into an _application jar_, which contains the application code and the required connector and library dependencies.
The user application dependencies explicitly do not include the Flink DataSet / DataStream APIs and runtime dependencies, because those are already part of Flink’s Core Dependencies.
## Setting up a Project: Basic Dependencies
Every Flink application needs, at the very least, the API dependencies to develop against. For Maven, you can use the [Java Project Template](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/projectsetup/java_api_quickstart.html) or [Scala Project Template](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/projectsetup/scala_api_quickstart.html) to create a program skeleton with these initial dependencies.
When setting up a project manually, you need to add the following dependencies for the Java/Scala API (here presented in Maven syntax, but the same dependencies apply to other build tools such as Gradle or SBT as well):
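A sketch of those declarations is shown below, assuming a 1.7.x Flink version and the Scala 2.11 build of the streaming API; adjust the version and the Scala suffix to match your setup.

```xml
<!-- Flink API dependencies: needed to compile against, but already present
     on the cluster, hence scope "provided". The 1.7.2 version and the _2.11
     Scala suffix are placeholders for your setup. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-java</artifactId>
  <version>1.7.2</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>1.7.2</version>
  <scope>provided</scope>
</dependency>
```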
**Important:** Please note that all these dependencies have their scope set to _provided_. That means that they are needed to compile against, but that they should not be packaged into the project’s resulting application jar file - these dependencies are Flink Core Dependencies, which are already available in any setup.
It is highly recommended to keep the dependencies in scope _provided_. If they are not set to _provided_, the best case is that the resulting JAR becomes excessively large, because it also contains all Flink core dependencies. The worst case is that the Flink core dependencies that are added to the application’s jar file clash with some of your own dependency versions (which is normally avoided through inverted classloading).
**Note on IntelliJ:** To make the applications run within IntelliJ IDEA, the Flink dependencies need to be declared in scope _compile_ rather than _provided_. Otherwise IntelliJ will not add them to the classpath and the in-IDE execution will fail with a `NoClassDefFoundError`. To avoid having to declare the dependency scope as _compile_ (which is not recommended, see above), the above linked Java and Scala project templates use a trick: they add a profile that is selectively activated when the application is run in IntelliJ and only then promotes the dependencies to scope _compile_, without affecting the packaging of the JAR files.
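A minimal sketch of such a profile, assuming IntelliJ activates it via the `idea.version` property it sets when running Maven (the profile id and the listed dependency are illustrative):

```xml
<profile>
  <id>add-dependencies-for-IDEA</id>
  <activation>
    <property>
      <!-- IntelliJ IDEA sets this property, so the profile is only active inside the IDE -->
      <name>idea.version</name>
    </property>
  </activation>
  <dependencies>
    <!-- Re-declare the Flink API with scope "compile" so in-IDE runs find it on the
         classpath; packaging via "mvn package" outside the IDE is unaffected. -->
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-streaming-java_2.11</artifactId>
      <version>1.7.2</version>
      <scope>compile</scope>
    </dependency>
  </dependencies>
</profile>
```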
## Adding Connector and Library Dependencies
Most applications need specific connectors or libraries to run, for example a connector to Kafka, Cassandra, etc. These connectors are not part of Flink’s core dependencies and must hence be added as dependencies to the application.
Below is an example adding the connector for Kafka 0.10 as a dependency (Maven syntax):
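The declaration might look like the following (the `_2.11` suffix assumes a Scala 2.11 build, and the version is a placeholder for your Flink release):

```xml
<!-- Kafka 0.10 connector: an application dependency, so it keeps the default "compile" scope -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
  <version>1.7.2</version>
</dependency>
```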
We recommend packaging the application code and all its required dependencies into one _jar-with-dependencies_, which we refer to as the _application jar_. The application jar can be submitted to an already running Flink cluster, or added to a Flink application container image.
Projects created from the [Java Project Template](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/projectsetup/java_api_quickstart.html) or [Scala Project Template](//ci.apache.org/projects/flink/flink-docs-release-1.7/dev/projectsetup/scala_api_quickstart.html) are configured to automatically include the application dependencies into the application jar when running `mvn clean package`. For projects that are not set up from those templates, we recommend adding the Maven Shade Plugin (as listed in the Appendix below) to build the application jar with all required dependencies.
**Important:** For Maven (and other build tools) to correctly package the dependencies into the application jar, these application dependencies must be specified in scope _compile_ (unlike the core dependencies, which must be specified in scope _provided_).
## Scala Versions

Scala versions (2.10, 2.11, 2.12, etc.) are not binary compatible with one another. For that reason, Flink for Scala 2.11 cannot be used with an application that uses Scala 2.12.
All Flink dependencies that (transitively) depend on Scala are suffixed with the Scala version that they are built for, for example `flink-streaming-scala_2.11`.
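For example, a dependency on the Scala 2.11 build of the Scala streaming API might be declared as follows (the version is illustrative):

```xml
<!-- The _2.11 suffix pins this artifact to the Scala 2.11 build of Flink -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-scala_2.11</artifactId>
  <version>1.7.2</version>
  <scope>provided</scope>
</dependency>
```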
Developers that only use Java can pick any Scala version; Scala developers need to pick the Scala version that matches their application’s Scala version.
Please refer to the [build guide](//ci.apache.org/projects/flink/flink-docs-release-1.7/flinkDev/building.html#scala-versions) for details on how to build Flink for a specific Scala version.
**Note:** Because of major breaking changes in Scala 2.12, Flink 1.5 currently builds only for Scala 2.11. We aim to add support for Scala 2.12 in the next versions.
## Hadoop Dependencies

**General rule: It should never be necessary to add Hadoop dependencies directly to your application.** _(The only exception being when using existing Hadoop input-/output formats with Flink’s Hadoop compatibility wrappers)_
If you want to use Flink with Hadoop, you need to have a Flink setup that includes the Hadoop dependencies, rather than adding Hadoop as an application dependency. Please refer to the [Hadoop Setup Guide](//ci.apache.org/projects/flink/flink-docs-release-1.7/ops/deployment/hadoop.html) for details.
There are two main reasons for that design:
* Some Hadoop interaction happens in Flink’s core, possibly before the user application is started, for example setting up HDFS for checkpoints, authenticating via Hadoop’s Kerberos tokens, or deployment on YARN.
* Flink’s inverted classloading approach hides many transitive dependencies from the core dependencies. That applies not only to Flink’s own core dependencies, but also to Hadoop’s dependencies when present in the setup. That way, applications can use different versions of the same dependencies without running into dependency conflicts (and trust us, that’s a big deal, because Hadoop’s dependency tree is huge).
If you need Hadoop dependencies during testing or development inside the IDE (for example for HDFS access), please declare these dependencies with scope _test_ or _provided_.
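For example, an HDFS client dependency used only in tests might be declared like this (the artifact choice and version are illustrative; pick the Hadoop version that matches your cluster):

```xml
<!-- Hadoop client only on the test classpath; it is not packaged into the application jar -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.8.3</version>
  <scope>test</scope>
</dependency>
```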
## Appendix: Template for building a Jar with Dependencies
To build an application JAR that contains all dependencies required for declared connectors and libraries, you can use the following shade plugin definition:
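The configuration below is a sketch of such a setup (the plugin version, excluded artifacts, and main class are placeholders to adapt to your project):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.0.0</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <artifactSet>
              <excludes>
                <!-- leave out libraries that Flink's lib folder already provides -->
                <exclude>org.slf4j:*</exclude>
                <exclude>log4j:*</exclude>
              </excludes>
            </artifactSet>
            <filters>
              <filter>
                <!-- Do not copy signature files from the META-INF folder,
                     otherwise the shaded jar may fail with SecurityExceptions -->
                <artifact>*:*</artifact>
                <excludes>
                  <exclude>META-INF/*.SF</exclude>
                  <exclude>META-INF/*.DSA</exclude>
                  <exclude>META-INF/*.RSA</exclude>
                </excludes>
              </filter>
            </filters>
            <transformers>
              <!-- set the entry point of the resulting jar; replace with your main class -->
              <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                <mainClass>my.programs.main.clazz</mainClass>
              </transformer>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```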