Commit dbcbc091 authored by Rui Li, committed by Kurt Young

[FLINK-13352][docs] Using hive connector with hive-1.2.1 needs libfb303 jar

This closes #9223
Parent 21621fbc
@@ -97,25 +97,17 @@ The ultimate goal for integrating Flink with Hive metadata is that:
 2. Meta-objects created by `HiveCatalog` can be written back to Hive metastore such that Hive and other Hive-compatible applications can consume.
-## User-configured Catalog
-Catalogs are pluggable. Users can develop custom catalogs by implementing the `Catalog` interface, which defines a set of APIs for reading and writing catalog meta-objects such as database, tables, partitions, views, and functions.
+HiveCatalog
+-----------
-## Supported Hive Versions
+### Supported Hive Versions
 Flink's `HiveCatalog` officially supports Hive 2.3.4 and 1.2.1.
 The Hive version is explicitly specified as a string, either by passing it to the constructor when creating `HiveCatalog` instances directly in the Table API, or by specifying it in the YAML config file in SQL CLI. The valid Hive version strings are `2.3.4` and `1.2.1`.
-## Case Insensitive to Meta-Object Names
+### Case Insensitive to Meta-Object Names
 Note that Hive Metastore stores meta-object names in lower case. Thus, unlike `GenericInMemoryCatalog`, `HiveCatalog` is case-insensitive to meta-object names, and users need to be cautious about that.
-## Dependencies
+### Dependencies
 To use `HiveCatalog`, users need to include the following dependency jars.
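As a sketch of the version setting described in the hunk above, registering a `HiveCatalog` in the SQL CLI's YAML environment file looks roughly like the following (the catalog name `myhive` and the `hive-conf-dir` path are hypothetical placeholders, not values from this commit):

```yaml
catalogs:
  - name: myhive                   # hypothetical catalog name
    type: hive
    hive-conf-dir: /opt/hive-conf  # hypothetical directory containing hive-site.xml
    hive-version: 2.3.4            # or 1.2.1; must be one of the supported version strings
```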
@@ -140,6 +132,7 @@ For Hive 1.2.1, users need:
 - hive-metastore-1.2.1.jar
 - hive-exec-1.2.1.jar
+- libfb303-0.9.3.jar
 // Hadoop dependencies
@@ -156,7 +149,7 @@ If you don't have Hive dependencies at hand, they can be found at [mvnrepository.
 Note that users need to ensure compatibility between their Hive version and Hadoop version; otherwise problems may arise. For example, Hive 2.3.4 is compiled against Hadoop 2.7.2, so you may run into issues when using Hive 2.3.4 with Hadoop 2.4.
-## Data Type Mapping
+### Data Type Mapping
 For both Flink and Hive tables, `HiveCatalog` stores table schemas by mapping them to Hive table schemas with Hive data types. Types are dynamically mapped back on read.
@@ -196,6 +189,10 @@ The following limitations in Hive's data types impact the mapping between Flink
 \** maximum length is 65535
+## User-configured Catalog
+Catalogs are pluggable. Users can develop custom catalogs by implementing the `Catalog` interface, which defines a set of APIs for reading and writing catalog meta-objects such as database, tables, partitions, views, and functions.
+Catalog Registration
+--------------------