The simplest way of building Flink is by running:
~~~bash
cd incubator-flink
mvn clean install -DskipTests
~~~
This instructs Maven (`mvn`) to first remove all existing builds (`clean`) and then create a new Flink binary (`install`). The `-DskipTests` command prevents Maven from executing the unit tests.
[Read more](http://maven.apache.org/) about Apache Maven.
## Build Flink for a specific Hadoop Version
This section covers building Flink for a specific Hadoop version. Most users do not need to do this manually. The download page of Flink contains binary packages for common setups.
The problem is that Flink uses HDFS and YARN, which are both dependencies from Apache Hadoop. There exist many different versions of Hadoop (from both the upstream project and the different Hadoop distributions). If a user is using a wrong combination of versions, exceptions like this one can occur:
...
...
There are two main versions of Hadoop that we need to differentiate:
- Hadoop 1, with all versions starting with 1, like 1.2.1.
- Hadoop 2, with all versions starting with 2, like 2.2.0.
The main difference between Hadoop 1 and Hadoop 2 is the availability of Hadoop YARN (Hadoop's cluster resource manager).
By default, Flink uses the Hadoop 2 dependencies.
**To build Flink for Hadoop 1**, issue the following command:
~~~bash
mvn clean install -DskipTests -Dhadoop.profile=1
~~~
The `-Dhadoop.profile=1` flag instructs Maven to build Flink for Hadoop 1. Note that the features included in Flink change when using a different Hadoop profile. In particular, the support for YARN and the built-in HBase support are not available in Hadoop 1 builds.
You can also **specify a specific Hadoop version to build against**:
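As a sketch, using the `hadoop-two.version` property described at the end of this section, a build against a specific Hadoop 2 release could look like this:

~~~bash
# Build against Hadoop 2.4.0 instead of the default 2.2.0
mvn clean install -DskipTests -Dhadoop-two.version=2.4.0
~~~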
The `-Pvendor-repos` flag activates a Maven [build profile](http://maven.apache.org/guides/introduction/introduction-to-profiles.html) that includes the repositories of popular Hadoop vendors such as Cloudera, Hortonworks, or MapR.
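For illustration, a build against a vendor release might combine the profile with a vendor version string (the version string below is a hypothetical example, not a recommendation):

~~~bash
# Enable the vendor repositories and build against a vendor Hadoop release;
# the version string is a hypothetical example of a vendor-specific version.
mvn clean install -DskipTests -Pvendor-repos -Dhadoop-two.version=2.2.0-cdh5.0.0-beta-2
~~~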
**Build Flink for `hadoop2` versions before 2.2.0**
Maven will automatically build Flink with its YARN client. But there were some changes in Hadoop versions before the 2.2.0 Hadoop release that are not supported by Flink's YARN client. Therefore, you can disable building the YARN client with the following flag: `-P\!include-yarn`.
So if you are building Flink for Hadoop `2.0.0-alpha`, use the following command:
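A sketch of that command, assuming the `hadoop-two.version` property described later in this section:

~~~bash
# Build against Hadoop 2.0.0-alpha and skip the YARN client,
# which does not support pre-2.2.0 Hadoop releases.
mvn clean install -DskipTests -Dhadoop-two.version=2.0.0-alpha -P\!include-yarn
~~~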
The builds with Maven are controlled by [properties](http://maven.apache.org/pom.html#Properties) and [build profiles](http://maven.apache.org/guides/introduction/introduction-to-profiles.html).
There are two profiles, one for hadoop1 and one for hadoop2. When the hadoop2 profile is enabled (default), the system will also build the YARN client.
To enable the hadoop1 profile, set `-Dhadoop.profile=1` when building.
Depending on the profile, there are two Hadoop versions, set via properties. For "hadoop1", we use 1.2.1 by default; for "hadoop2", it is 2.2.0.
You can change these versions with the `hadoop-two.version` (or `hadoop-one.version`) property. For example, `-Dhadoop-two.version=2.4.0`.