Commit fae83c0b authored by zhangminglei, committed by Greg Hogan

[FLINK-7967] [config] Deprecate Hadoop specific Flink configuration options

This closes #4946
Parent 3561222c
@@ -72,8 +72,6 @@ definition. As another example, if this is set to `hdfs://localhost:9000/`, then a file path
without an explicit scheme definition, such as `/user/USERNAME/in.txt`, is going to be transformed into
`hdfs://localhost:9000/user/USERNAME/in.txt`. This scheme is used **ONLY** if no other scheme is specified (explicitly) in the user-provided `URI`.
- `fs.hdfs.hadoopconf`: The absolute path to the Hadoop File System's (HDFS) configuration **directory** (OPTIONAL VALUE). Specifying this value allows programs to reference HDFS files using short URIs (`hdfs:///path/to/files`, without including the address and port of the NameNode in the file URI). Without this option, HDFS files can be accessed, but require fully qualified URIs like `hdfs://address:port/path/to/files`. This option also causes file writers to pick up HDFS's default values for block sizes and replication factors. Flink will look for the "core-site.xml" and "hdfs-site.xml" files in the specified directory.
- `classloader.resolve-order`: Whether Flink should use a child-first `ClassLoader` when loading
user-code classes or a parent-first `ClassLoader`. Can be one of `parent-first` or `child-first`. (default: `child-first`)
@@ -246,9 +244,13 @@ Default value is the `akka.ask.timeout`.
### HDFS
<div class="alert alert-warning">
<strong>Note:</strong> These keys are deprecated and it is recommended to configure the Hadoop path with the environment variable <code>HADOOP_CONF_DIR</code> instead.
</div>
These parameters configure the default HDFS used by Flink. Setups that do not specify an HDFS configuration have to specify the full path to HDFS files (`hdfs://address:port/path/to/files`). Files will also be written with default HDFS parameters (block size, replication factor). A brief sketch of the deprecated key in use follows the list below.
- `fs.hdfs.hadoopconf`: The absolute path to the Hadoop configuration directory. The system will look for the "core-site.xml" and "hdfs-site.xml" files in that directory (DEFAULT: null).
- `fs.hdfs.hadoopconf`: The absolute path to the Hadoop File System's (HDFS) configuration **directory** (OPTIONAL VALUE). Specifying this value allows programs to reference HDFS files using short URIs (`hdfs:///path/to/files`, without including the address and port of the NameNode in the file URI). Without this option, HDFS files can be accessed, but require fully qualified URIs like `hdfs://address:port/path/to/files`. This option also causes file writers to pick up HDFS's default values for block sizes and replication factors. Flink will look for the "core-site.xml" and "hdfs-site.xml" files in the specified directory.
- `fs.hdfs.hdfsdefault`: The absolute path of Hadoop's own configuration file "hdfs-default.xml" (DEFAULT: null).
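
A minimal sketch of setting the deprecated key programmatically on a Flink `Configuration` (the class name and the directory `/etc/hadoop/conf` are placeholders, not part of this commit):

```java
import org.apache.flink.configuration.Configuration;

public class DeprecatedHadoopConfKeyExample {

    public static void main(String[] args) {
        Configuration config = new Configuration();

        // Deprecated: point Flink at the Hadoop configuration directory
        // via the Flink configuration key.
        config.setString("fs.hdfs.hadoopconf", "/etc/hadoop/conf");

        // Recommended instead: set the environment variable before starting
        // Flink (e.g. `export HADOOP_CONF_DIR=/etc/hadoop/conf`) and leave
        // the fs.hdfs.* keys unset.
    }
}
```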
......
......@@ -595,19 +595,28 @@ public final class ConfigConstants {
/**
* Path to hdfs-default.xml file
*
* @deprecated Use environment variable HADOOP_CONF_DIR instead.
*/
@Deprecated
public static final String HDFS_DEFAULT_CONFIG = "fs.hdfs.hdfsdefault";
/**
* Path to hdfs-site.xml file
*
* @deprecated Use environment variable HADOOP_CONF_DIR instead.
*/
@Deprecated
public static final String HDFS_SITE_CONFIG = "fs.hdfs.hdfssite";
/**
* Path to Hadoop configuration
*
* @deprecated Use environment variable HADOOP_CONF_DIR instead.
*/
@Deprecated
public static final String PATH_HADOOP_CONFIG = "fs.hdfs.hadoopconf";
// ------------------------ File System Behavior ------------------------
/**
......
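
To make the deprecation concrete, here is a hedged sketch of how a caller might resolve the Hadoop configuration directory, preferring the `HADOOP_CONF_DIR` environment variable and falling back to the deprecated key. The resolution order and the class are illustrative, not Flink's actual lookup code:

```java
import org.apache.flink.configuration.ConfigConstants;
import org.apache.flink.configuration.Configuration;

/** Illustrative resolver; not Flink's actual lookup logic. */
public final class HadoopConfDirResolver {

    private HadoopConfDirResolver() {}

    /** Prefers the HADOOP_CONF_DIR environment variable over the deprecated key. */
    public static String resolve(Configuration flinkConfig) {
        String fromEnv = System.getenv("HADOOP_CONF_DIR");
        if (fromEnv != null && !fromEnv.isEmpty()) {
            return fromEnv;
        }
        // Backwards-compatible fallback to the deprecated Flink key.
        return flinkConfig.getString(ConfigConstants.PATH_HADOOP_CONFIG, null);
    }
}
```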
@@ -151,6 +151,9 @@ web.port: 8081
# Path to the Hadoop configuration directory.
#
# Note: these keys are deprecated and it is recommended to configure the Hadoop
# path with the environment variable 'HADOOP_CONF_DIR' instead.
#
# This configuration is used when writing into HDFS. Unless specified otherwise,
# HDFS file creation will use HDFS default settings with respect to block-size,
# replication factor, etc.
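#
# Illustrative example of the deprecated key (commented out; the path below
# is a placeholder, not a real default):
#
# fs.hdfs.hadoopconf: /path/to/hadoop/conf/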
......