Commit 839fd030 authored by Robert Metzger, committed by Stephan Ewen

[docs] Prepare documentation for 0.8 release

Parent 0b2edf13
......@@ -44,7 +44,7 @@ title of the page. This title is used as the top-level heading for the page.
Furthermore, you can access variables found in `docs/_config.yml` as follows:
{{ site.FLINK_VERSION_STABLE }}
{{ site.FLINK_VERSION_SHORT }}
This will be replaced with the value of the variable when generating the docs.
......
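As an aside, a minimal sketch of generating the docs locally so these variables actually get substituted (assuming a plain Jekyll install; the project may ship its own build script, which takes precedence):
~~~bash
# Build the documentation; Jekyll replaces {{ site.FLINK_VERSION_SHORT }} etc.
# with the values defined in docs/_config.yml
cd docs
jekyll build --config _config.yml
~~~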
......@@ -22,9 +22,9 @@
# {{ site.CONFIG_KEY }}
#------------------------------------------------------------------------------
FLINK_VERSION_STABLE: "0.7.0-incubating" # this variable can point to a SNAPSHOT version in the git source.
FLINK_VERSION_SHORT: "0.7.0"
FLINK_VERSION_HADOOP_2_STABLE: 0.7-hadoop2-incubating
FLINK_VERSION_HADOOP1_STABLE: "0.8.0-hadoop1" # this variable can point to a SNAPSHOT version in the git source.
FLINK_VERSION_SHORT: "0.8.0"
FLINK_VERSION_HADOOP2_STABLE: "0.8.0"
FLINK_SCALA_VERSION: "2.10.4"
FLINK_SCALA_VERSION_SHORT: "2.10"
FLINK_ISSUES_URL: https://issues.apache.org/jira/browse/FLINK
......@@ -33,8 +33,8 @@ FLINK_GITHUB_URL: https://github.com/apache/incubator-flink
FLINK_WEBSITE_URL: http://flink.incubator.apache.org
FLINK_DOWNLOAD_URL: http://flink.incubator.apache.org/downloads.html
FLINK_DOWNLOAD_URL_HADOOP_1_STABLE: http://www.apache.org/dyn/closer.cgi/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop1.tgz
FLINK_DOWNLOAD_URL_HADOOP_2_STABLE: http://www.apache.org/dyn/closer.cgi/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop2.tgz
FLINK_DOWNLOAD_URL_HADOOP1_STABLE: http://www.apache.org/dyn/closer.cgi/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop1.tgz
FLINK_DOWNLOAD_URL_HADOOP2_STABLE: http://www.apache.org/dyn/closer.cgi/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop2.tgz
FLINK_DOWNLOAD_URL_YARN_STABLE: http://www.apache.org/dyn/closer.cgi/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop2-yarn.tgz
FLINK_WGET_URL_YARN_STABLE: http://artfiles.org/apache.org/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop2-yarn.tgz
......
......@@ -46,33 +46,33 @@ The command line can be used to
- Run example program with no arguments.
./bin/flink run ./examples/flink-java-examples-{{ site.FLINK_VERSION_STABLE }}-WordCount.jar
./bin/flink run ./examples/flink-java-examples-{{ site.FLINK_VERSION_SHORT }}-WordCount.jar
- Run example program with arguments for input and result files
./bin/flink run ./examples/flink-java-examples-{{ site.FLINK_VERSION_STABLE }}-WordCount.jar \
./bin/flink run ./examples/flink-java-examples-{{ site.FLINK_VERSION_SHORT }}-WordCount.jar \
file:///home/user/hamlet.txt file:///home/user/wordcount_out
- Run example program with parallelism 16 and arguments for input and result files
./bin/flink run -p 16 ./examples/flink-java-examples-{{ site.FLINK_VERSION_STABLE }}-WordCount.jar \
./bin/flink run -p 16 ./examples/flink-java-examples-{{ site.FLINK_VERSION_SHORT }}-WordCount.jar \
file:///home/user/hamlet.txt file:///home/user/wordcount_out
- Run example program on a specific JobManager:
./bin/flink run -m myJMHost:6123 \
./examples/flink-java-examples-{{ site.FLINK_VERSION_STABLE }}-WordCount.jar \
./examples/flink-java-examples-{{ site.FLINK_VERSION_SHORT }}-WordCount.jar \
file:///home/user/hamlet.txt file:///home/user/wordcount_out
- Display the expected arguments for the WordCount example program:
./bin/flink info -d ./examples/flink-java-examples-{{ site.FLINK_VERSION_STABLE }}-WordCount.jar
./bin/flink info -d ./examples/flink-java-examples-{{ site.FLINK_VERSION_SHORT }}-WordCount.jar
- Display the optimized execution plan for the WordCount example program as JSON:
./bin/flink info -e \
./examples/flink-java-examples-{{ site.FLINK_VERSION_STABLE }}-WordCount.jar \
./examples/flink-java-examples-{{ site.FLINK_VERSION_SHORT }}-WordCount.jar \
file:///home/user/hamlet.txt file:///home/user/wordcount_out
- List scheduled and running jobs (including their JobIDs):
......
......@@ -49,7 +49,7 @@ If you are developing your program as a Maven project, you have to add the
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients</artifactId>
<version>{{ site.FLINK_VERSION_STABLE }}</version>
<version>{{ site.FLINK_VERSION_SHORT }}</version>
</dependency>
~~~
......@@ -95,7 +95,7 @@ If you are developing your program in a Maven project, you have to add the
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients</artifactId>
<version>{{ site.FLINK_VERSION_STABLE }}</version>
<version>{{ site.FLINK_VERSION_SHORT }}</version>
</dependency>
~~~
......
......@@ -40,7 +40,7 @@ interface. Hadoop ships adapters for FTP, [Hftp](http://hadoop.apache.org/docs/r
Flink has integrated testcases to validate the integration with [Tachyon](http://tachyon-project.org/).
Other file systems for which we have tested the integration are the
[Google Cloud Storage Connector for Hadoop](https://cloud.google.com/hadoop/google-cloud-storage-connector).
[Google Cloud Storage Connector for Hadoop](https://cloud.google.com/hadoop/google-cloud-storage-connector) and [XtreemFS](http://www.xtreemfs.org/).
In order to use a Hadoop file system with Flink, make sure that the `fs.hdfs.hadoopconf` property
in `flink-conf.yaml` is set to the Hadoop configuration directory.
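For illustration, a hedged sketch of setting that property (the Hadoop configuration path `/etc/hadoop/conf` is an assumption; use the directory of your installation):
~~~bash
# Point Flink to the Hadoop configuration directory (path is an assumption)
echo "fs.hdfs.hadoopconf: /etc/hadoop/conf" >> ./conf/flink-conf.yaml
~~~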
......@@ -90,16 +90,16 @@ Execute the following commands:
curl https://raw.githubusercontent.com/apache/incubator-flink/master/flink-quickstart/quickstart.sh | bash
~~~
3. Set the version of Flink to `{{site.FLINK_VERSION_HADOOP_2_STABLE}}` in the `pom.xml` file. The quickstart.sh script sets the version to the `hadoop1` version of Flink. Since `microsoft-hadoop-azure` has been written for the Hadoop 2.2 (mapreduce API) version, we need to use the appropriate Flink version.
3. Set the version of Flink to `{{site.FLINK_VERSION_HADOOP2_STABLE}}` in the `pom.xml` file. The quickstart.sh script sets the version to the `hadoop1` version of Flink. Since `microsoft-hadoop-azure` has been written for the Hadoop 2.2 (mapreduce API) version, we need to use the appropriate Flink version.
Replace all occurrences of `<version>{{site.FLINK_VERSION_STABLE}}</version>` with `<version>{{site.FLINK_VERSION_HADOOP_2_STABLE}}</version>`.
Replace all occurrences of `<version>{{site.FLINK_VERSION_SHORT}}</version>` with `<version>{{site.FLINK_VERSION_HADOOP2_STABLE}}</version>`.
4. Add the following dependencies (in the `<dependencies>` section) to your `pom.xml` file:
~~~xml
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-hadoop-compatibility</artifactId>
<version>{{site.FLINK_VERSION_HADOOP_2_STABLE}}</version>
<version>{{site.FLINK_VERSION_HADOOP2_STABLE}}</version>
</dependency>
<dependency>
<groupId>com.microsoft.hadoop</groupId>
......
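The version replacement described in step 3 above can be scripted; a hedged sketch (GNU `sed` assumed, and the Liquid variables stand for the rendered version strings):
~~~bash
# Replace the quickstart version with the hadoop2 build of Flink in pom.xml
sed -i 's|<version>{{site.FLINK_VERSION_SHORT}}</version>|<version>{{site.FLINK_VERSION_HADOOP2_STABLE}}</version>|g' pom.xml
~~~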
......@@ -46,7 +46,7 @@ Add the following dependency to your `pom.xml` to use the Hadoop Compatibility L
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-hadoop-compatibility</artifactId>
<version>{{site.FLINK_VERSION_STABLE}}</version>
<version>{{site.FLINK_VERSION_SHORT}}</version>
</dependency>
~~~
......
......@@ -32,10 +32,10 @@ a thorough introduction of the Flink API please refer to the
## Download
You can download Flink from the [downloads]({{ site.FLINK_DOWNLOAD_URL }}) page
of the [project website]({{ site.FLINK_WEBSITE_URL }}). This documentation is for version {{ site.FLINK_VERSION_STABLE }}. Be careful
of the [project website]({{ site.FLINK_WEBSITE_URL }}). This documentation is for version {{ site.FLINK_VERSION_SHORT }}. Be careful
when picking a version: there are different versions depending on the Hadoop and/or
HDFS version that you want to use with Flink. Please refer to [building](building.html) if you
want to build Flink yourself from source.
In Version {{ site.FLINK_VERSION_STABLE}} the Scala API uses Scala {{ site.FLINK_SCALA_VERSION_SHORT }}. Please make
In Version {{ site.FLINK_VERSION_SHORT}} the Scala API uses Scala {{ site.FLINK_SCALA_VERSION_SHORT }}. Please make
sure to use a compatible version.
......@@ -49,7 +49,7 @@ Use one of the following commands to __create a project__:
$ mvn archetype:generate \
-DarchetypeGroupId=org.apache.flink \
-DarchetypeArtifactId=flink-quickstart-java \
-DarchetypeVersion={{site.FLINK_VERSION_STABLE}}
-DarchetypeVersion={{site.FLINK_VERSION_SHORT}}
{% endhighlight %}
This allows you to <strong>name your newly created project</strong>. It will interactively ask you for the groupId, artifactId, and package name.
</div>
......
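As an aside, the archetype call above can also be run non-interactively with the standard Maven batch-mode flags; the groupId, artifactId, version, and package values below are placeholders, not taken from the docs:
~~~bash
# Batch-mode variant of the quickstart archetype (placeholder project coordinates)
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.flink \
  -DarchetypeArtifactId=flink-quickstart-java \
  -DarchetypeVersion={{site.FLINK_VERSION_SHORT}} \
  -DgroupId=org.myorg.quickstart \
  -DartifactId=quickstart \
  -Dversion=0.1 \
  -Dpackage=org.myorg.quickstart \
  -DinteractiveMode=false
~~~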
......@@ -45,7 +45,7 @@ If you are developing your program in a Maven project, you have to add the `flin
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients</artifactId>
<version>{{site.FLINK_VERSION_STABLE}}</version>
<version>{{site.FLINK_VERSION_SHORT}}</version>
</dependency>
~~~
......
......@@ -130,7 +130,7 @@ manually create the project, you can use the archetype and create a project by c
mvn archetype:generate \
-DarchetypeGroupId=org.apache.flink \
-DarchetypeArtifactId=flink-quickstart-java \
-DarchetypeVersion={{site.FLINK_VERSION_STABLE }}
-DarchetypeVersion={{site.FLINK_VERSION_SHORT }}
{% endhighlight %}
</div>
<div data-lang="scala" markdown="1">
......@@ -138,7 +138,7 @@ mvn archetype:generate /
mvn archetype:generate \
-DarchetypeGroupId=org.apache.flink \
-DarchetypeArtifactId=flink-quickstart-scala \
-DarchetypeVersion={{site.FLINK_VERSION_STABLE }}
-DarchetypeVersion={{site.FLINK_VERSION_SHORT }}
{% endhighlight %}
</div>
</div>
......@@ -152,12 +152,12 @@ If you want to add Flink to an existing Maven project, add the following entry t
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>{{site.FLINK_VERSION_STABLE }}</version>
<version>{{site.FLINK_VERSION_SHORT }}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients</artifactId>
<version>{{site.FLINK_VERSION_STABLE }}</version>
<version>{{site.FLINK_VERSION_SHORT }}</version>
</dependency>
{% endhighlight %}
</div>
......@@ -166,12 +166,12 @@ If you want to add Flink to an existing Maven project, add the following entry t
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-scala</artifactId>
<version>{{site.FLINK_VERSION_STABLE }}</version>
<version>{{site.FLINK_VERSION_SHORT }}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients</artifactId>
<version>{{site.FLINK_VERSION_STABLE }}</version>
<version>{{site.FLINK_VERSION_SHORT }}</version>
</dependency>
{% endhighlight %}
</div>
......
......@@ -49,7 +49,7 @@ $ curl https://raw.githubusercontent.com/apache/incubator-flink/master/flink-qui
$ mvn archetype:generate \
-DarchetypeGroupId=org.apache.flink \
-DarchetypeArtifactId=flink-quickstart-scala \
-DarchetypeVersion={{site.FLINK_VERSION_STABLE}}
-DarchetypeVersion={{site.FLINK_VERSION_SHORT}}
{% endhighlight %}
This allows you to <strong>name your newly created project</strong>. It will interactively ask you for the groupId, artifactId, and package name.
</div>
......
......@@ -38,10 +38,10 @@ Download the ready to run binary package. Choose the Flink distribution that __m
<p>
<div class="tab-content text-center">
<div class="tab-pane active" id="bin-hadoop1">
<a class="btn btn-info btn-lg" onclick="_gaq.push(['_trackEvent','Action','download-quickstart-setup-1',this.href]);" href="{{site.FLINK_DOWNLOAD_URL_HADOOP_1_STABLE}}"><i class="icon-download"> </i> Download Flink for Hadoop 1.2</a>
<a class="btn btn-info btn-lg" onclick="_gaq.push(['_trackEvent','Action','download-quickstart-setup-1',this.href]);" href="{{site.FLINK_DOWNLOAD_URL_HADOOP1_STABLE}}"><i class="icon-download"> </i> Download Flink for Hadoop 1.2</a>
</div>
<div class="tab-pane" id="bin-hadoop2">
<a class="btn btn-info btn-lg" onclick="_gaq.push(['_trackEvent','Action','download-quickstart-setup-2',this.href]);" href="{{site.FLINK_DOWNLOAD_URL_HADOOP_2_STABLE}}"><i class="icon-download"> </i> Download Flink for Hadoop 2</a>
<a class="btn btn-info btn-lg" onclick="_gaq.push(['_trackEvent','Action','download-quickstart-setup-2',this.href]);" href="{{site.FLINK_DOWNLOAD_URL_HADOOP2_STABLE}}"><i class="icon-download"> </i> Download Flink for Hadoop 2</a>
</div>
</div>
</p>
......@@ -57,7 +57,7 @@ Download the ready to run binary package. Choose the Flink distribution that __m
~~~bash
$ cd ~/Downloads # Go to download directory
$ tar xzf flink-*.tgz # Unpack the downloaded archive
$ cd flink-{{site.FLINK_VERSION_STABLE}}
$ cd flink-{{site.FLINK_VERSION_SHORT}}
$ bin/start-local.sh # Start Flink
~~~
......@@ -78,7 +78,7 @@ Run the __Word Count example__ to see Flink at work.
* __Start the example program__:
~~~bash
$ bin/flink run ./examples/flink-java-examples-{{site.FLINK_VERSION_STABLE}}-WordCount.jar file://`pwd`/hamlet.txt file://`pwd`/wordcount-result.txt
$ bin/flink run ./examples/flink-java-examples-{{site.FLINK_VERSION_SHORT}}-WordCount.jar file://`pwd`/hamlet.txt file://`pwd`/wordcount-result.txt
~~~
* You will find a file called __wordcount-result.txt__ in your current directory.
......
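A quick sanity check of the output (file name taken from the step above; WordCount writes one line per distinct word):
~~~bash
head -n 10 wordcount-result.txt   # show the first few word counts
wc -l wordcount-result.txt        # number of distinct words
~~~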
......@@ -43,7 +43,7 @@ Add the following dependency to your `pom.xml` to use the Spargel.
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-spargel</artifactId>
<version>{{site.FLINK_VERSION_STABLE}}</version>
<version>{{site.FLINK_VERSION_SHORT}}</version>
</dependency>
~~~
......
......@@ -42,7 +42,7 @@ Add the following dependency to your `pom.xml` to use the Flink Streaming.
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-core</artifactId>
<version>{{site.FLINK_VERSION_STABLE}}</version>
<version>{{site.FLINK_VERSION_SHORT}}</version>
</dependency>
~~~
......
......@@ -29,8 +29,8 @@ Start YARN session with 4 Task Managers (each with 4 GB of Heapspace):
~~~bash
wget {{ site.FLINK_WGET_URL_YARN_STABLE }}
tar xvzf flink-{{ site.FLINK_VERSION_STABLE }}-bin-hadoop2-yarn.tgz
cd flink-yarn-{{ site.FLINK_VERSION_STABLE }}/
tar xvzf flink-{{ site.FLINK_VERSION_SHORT }}-bin-hadoop2-yarn.tgz
cd flink-yarn-{{ site.FLINK_VERSION_SHORT }}/
./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096
~~~
......@@ -60,11 +60,11 @@ Download the YARN tgz package on the [download page]({{site.baseurl}}/downloads.
Extract the package using:
~~~bash
tar xvzf flink-{{ site.FLINK_VERSION_STABLE }}-bin-hadoop2-yarn.tgz
cd flink-yarn-{{site.FLINK_VERSION_STABLE }}/
tar xvzf flink-{{ site.FLINK_VERSION_SHORT }}-bin-hadoop2-yarn.tgz
cd flink-yarn-{{site.FLINK_VERSION_SHORT }}/
~~~
If you want to build the YARN .tgz file from source, follow the [build instructions](building.html). Make sure to use the `-Dhadoop.profile=2` profile. You can find the result of the build in `flink-dist/target/flink-{{ site.FLINK_VERSION_STABLE }}-bin/flink-yarn-{{ site.FLINK_VERSION_STABLE }}/` (*Note: The version might be different for you*).
If you want to build the YARN .tgz file from source, follow the [build instructions](building.html). You can find the result of the build in `flink-dist/target/flink-{{ site.FLINK_VERSION_SHORT }}-bin/flink-yarn-{{ site.FLINK_VERSION_SHORT }}/` (*Note: The version might be different for you*).
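A minimal sketch of such a source build (repository URL as configured above; the Maven invocation is an assumption, see building.html for the authoritative steps):
~~~bash
# Build Flink from source and locate the YARN distribution directory
git clone https://github.com/apache/incubator-flink.git
cd incubator-flink
mvn clean package -DskipTests
ls flink-dist/target/
~~~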
#### Start a Session
......@@ -157,14 +157,14 @@ Use the *run* action to submit a job to YARN. The client is able to determine th
~~~bash
wget -O apache-license-v2.txt http://www.apache.org/licenses/LICENSE-2.0.txt
./bin/flink run -j ./examples/flink-java-examples-{{site.FLINK_VERSION_STABLE }}-WordCount.jar \
./bin/flink run -j ./examples/flink-java-examples-{{site.FLINK_VERSION_SHORT }}-WordCount.jar \
-a 1 file://`pwd`/apache-license-v2.txt file://`pwd`/wordcount-result.txt
~~~
If you see the following error, make sure that all TaskManagers have started:
~~~bash
Exception in thread "main" org.apache.flinkcompiler.CompilerException:
Exception in thread "main" org.apache.flink.compiler.CompilerException:
Available instances could not be determined from job manager: Connection timed out.
~~~
......