Commit 36a8fdad authored by Stephan Ewen

[docs] Various fixes in docs

  - corrected links to download page
  - corrected download urls
  - corrected wget shortcut for yarn setup
  - corrected links to faq in yarn setup
  - corrected guide for manual yarn build
  - FAQ layout
Parent 4bb04775
@@ -19,6 +19,7 @@ FLINK_DOWNLOAD_URL: http://flink.incubator.apache.org/downloads.html
FLINK_DOWNLOAD_URL_HADOOP_1_STABLE: http://www.apache.org/dyn/closer.cgi/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop1.tgz
FLINK_DOWNLOAD_URL_HADOOP_2_STABLE: http://www.apache.org/dyn/closer.cgi/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop2.tgz
FLINK_DOWNLOAD_URL_YARN_STABLE: http://www.apache.org/dyn/closer.cgi/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop2-yarn.tgz
+FLINK_WGET_URL_YARN_STABLE: http://artfiles.org/apache.org/incubator/flink/flink-0.7.0-incubating/flink-0.7.0-incubating-bin-hadoop2-yarn.tgz
#------------------------------------------------------------------------------
# BUILD CONFIG
@@ -251,7 +251,7 @@ guide.
## Flink Setup
-Go to the [downloads page](downloads/) and get the ready to run
+Go to the [downloads page]({{site.baseurl}}/downloads.html) and get the ready to run
package. Make sure to pick the Flink package **matching your Hadoop
version**.
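As a minimal sketch of this step, assuming the Hadoop 2 package (the archive and directory names are assumptions):

~~~bash
# Unpack the package downloaded from the downloads page; the archive
# and directory names assume the Hadoop 2 build of the stable release.
tar xvzf flink-{{ site.FLINK_VERSION_STABLE }}-bin-hadoop2.tgz
cd flink-{{ site.FLINK_VERSION_STABLE }}/
~~~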
@@ -79,6 +79,7 @@ In particular, if your function is an inner class, or anonymous inner class,
it contains a hidden reference to the enclosing class (usually called `this$0`, if you look
at the function in the debugger). If the enclosing class is not serializable, this is probably
the source of the error. Solutions are to
- make the function a standalone class, or a static inner class (no more reference to the enclosing class)
- make the enclosing class serializable
- use a Java 8 lambda function.
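For illustration, a minimal Java sketch of the first two options (the class names are hypothetical, and the import path may differ between Flink versions):

~~~java
// Hypothetical sketch; Outer and LineLengthMapper are illustrative
// names, and the import path follows the 0.7-era Java API.
import org.apache.flink.api.common.functions.MapFunction;

public class Outer {

    // Option 1: a static inner class keeps no hidden this$0 reference
    // to Outer, so it serializes independently of the enclosing class.
    public static class LineLengthMapper implements MapFunction<String, Integer> {
        @Override
        public Integer map(String value) {
            return value.length();
        }
    }
}

// Option 2: alternatively, make the enclosing class serializable:
// public class Outer implements java.io.Serializable { ... }
~~~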
@@ -9,7 +9,7 @@ This documentation is intended to provide instructions on how to run Flink local
## Download
-Go to the [downloads page]({{site.baseurl}}/downloads/) and get the ready to run package. If you want to interact with Hadoop (e.g. HDFS or HBase), make sure to pick the Flink package **matching your Hadoop version**. When in doubt or you plan to just work with the local file system pick the package for Hadoop 1.2.x.
+Go to the [downloads page]({{site.baseurl}}/downloads.html) and get the ready to run package. If you want to interact with Hadoop (e.g. HDFS or HBase), make sure to pick the Flink package **matching your Hadoop version**. When in doubt or you plan to just work with the local file system pick the package for Hadoop 1.2.x.
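A minimal sketch of the subsequent steps, assuming the Hadoop 1 package (the archive and directory names are assumptions; `start-local.sh` is the local-mode start script):

~~~bash
# Unpack the downloaded package and start a local Flink instance.
tar xvzf flink-{{ site.FLINK_VERSION_STABLE }}-bin-hadoop1.tgz
cd flink-{{ site.FLINK_VERSION_STABLE }}/
./bin/start-local.sh
~~~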
## Requirements
@@ -116,4 +116,4 @@ export SHELLOPTS
set -o igncr
~~~
-Save the file and open a new bash shell.
\ No newline at end of file
+Save the file and open a new bash shell.
@@ -10,7 +10,7 @@ title: "YARN Setup"
Start YARN session with 4 Task Managers (each with 4 GB of Heapspace):
~~~bash
-wget {{ site.FLINK_DOWNLOAD_URL_YARN_STABLE }}
+wget {{ site.FLINK_WGET_URL_YARN_STABLE }}
tar xvzf flink-{{ site.FLINK_VERSION_STABLE }}-bin-hadoop2-yarn.tgz
cd flink-yarn-{{ site.FLINK_VERSION_STABLE }}/
./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096
@@ -27,7 +27,7 @@ Apache [Hadoop YARN](http://hadoop.apache.org/) is a cluster resource management
- Apache Hadoop 2.2
- HDFS (Hadoop Distributed File System)
-If you have troubles using the Flink YARN client, have a look in the [FAQ section]({{site.baseurl}}/docs/0.5/general/faq.html).
+If you have troubles using the Flink YARN client, have a look in the [FAQ section](faq.html).
### Start Flink Session
@@ -37,18 +37,18 @@ A session will start all required Flink services (JobManager and TaskManagers) s
#### Download Flink for YARN
-Download the YARN tgz package on the [download page]({{site.baseurl}}/downloads/). It contains the required files.
-If you want to build the YARN .tgz file from sources, follow the [build instructions](building.html). Make sure to use the `-Dhadoop.profile=2` profile. You can find the file in `flink-dist/target/flink-dist-{{site.docs_05_stable}}-yarn.tar.gz` (*Note: The version might be different for you* ).
+Download the YARN tgz package on the [download page]({{site.baseurl}}/downloads.html). It contains the required files.
Extract the package using:
~~~bash
-tar xvzf flink-dist-{{site.FLINK_VERSION_STABLE }}-yarn.tar.gz
+tar xvzf flink-{{ site.FLINK_VERSION_STABLE }}-bin-hadoop2-yarn.tgz
cd flink-yarn-{{site.FLINK_VERSION_STABLE }}/
~~~
+If you want to build the YARN .tgz file from sources, follow the [build instructions](building.html). Make sure to use the `-Dhadoop.profile=2` profile. You can find the result of the build in `flink-dist/target/flink-{{ site.FLINK_VERSION_STABLE }}-bin/flink-yarn-{{ site.FLINK_VERSION_STABLE }}/` (*Note: The version might be different for you* ).
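A sketch of such a source build; the repository URL and Maven goals are assumptions, and only the `-Dhadoop.profile=2` flag comes from the instructions above:

~~~bash
# Assumed repository location for the incubating project.
git clone https://github.com/apache/incubator-flink.git
cd incubator-flink
# Build with the Hadoop 2 profile; skipping tests is an assumption
# made here to shorten the build.
mvn clean package -DskipTests -Dhadoop.profile=2
~~~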
#### Start a Session
Use the following command to start a session
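The command itself falls outside this hunk; a sketch of such an invocation, mirroring the quickstart example above (the flag values are illustrative):

~~~bash
# Illustrative values: 4 TaskManagers, 1 GB JobManager heap,
# 4 GB TaskManager heap.
./bin/yarn-session.sh -n 4 -jm 1024 -tm 4096
~~~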
@@ -202,4 +202,5 @@ The next step of the client is to request (step 2) a YARN container to start the
The *JobManager* and AM run in the same container. Once they have started successfully, the AM knows the address of the JobManager (its own host). It generates a new Flink configuration file for the TaskManagers (so that they can connect to the JobManager). The file is also uploaded to HDFS. Additionally, the *AM* container serves Flink's web interface. The ports Flink uses for its services are the standard ports configured by the user plus the application id as an offset. This allows users to execute multiple Flink YARN sessions in parallel.
-After that, the AM starts allocating the containers for Flink's TaskManagers, which will download the jar file and the modified configuration from the HDFS. Once these steps are completed, Flink is set up and ready to accept Jobs.
\ No newline at end of file
+After that, the AM starts allocating the containers for Flink's TaskManagers, which will download the jar file and the modified configuration from the HDFS. Once these steps are completed, Flink is set up and ready to accept Jobs.