Commit 437e28fb authored by SilverNarcissus, committed by Jialin Qiao

Update readme (#536)

* update readme
Parent 6a582753
...@@ -51,7 +51,7 @@ spark-shell --jars spark-iotdb-connector-0.9.0-SNAPSHOT.jar,iotdb-jdbc-0.9.0-SNA
import org.apache.iotdb.spark.db._
val df = spark.read.format("org.apache.iotdb.spark.db").option("url","jdbc:iotdb://127.0.0.1:6667/").option("sql","select * from root").load
df.printSchema()
...@@ -64,7 +64,7 @@ spark-shell --jars spark-iotdb-connector-0.9.0-SNAPSHOT.jar,iotdb-jdbc-0.9.0-SNA
import org.apache.iotdb.spark.db._
val df = spark.read.format("org.apache.iotdb.spark.db").option("url","jdbc:iotdb://127.0.0.1:6667/").option("sql","select * from root").
option("lowerBound", [lower bound of time that you want query(include)]).option("upperBound", [upper bound of time that you want query(include)]).
option("numPartition", [the partition number you want]).load
...@@ -131,7 +131,7 @@ You can also use narrow table form which as follows: (You can see part 4 about h
```
import org.apache.iotdb.spark.db._
val wide_df = spark.read.format("org.apache.iotdb.spark.db").option("url", "jdbc:iotdb://127.0.0.1:6667/").option("sql", "select * from root where time < 1100 and time > 1000").load
val narrow_df = Transformer.toNarrowForm(spark, wide_df)
```
...@@ -158,7 +158,7 @@ public class Example {
        .master("local[*]")
        .getOrCreate();
    Dataset<Row> df = spark.read().format("org.apache.iotdb.spark.db")
        .option("url","jdbc:iotdb://127.0.0.1:6667/")
        .option("sql","select * from root").load();
...
...@@ -18,31 +18,59 @@
under the License.
-->
# Grafana
<!-- TOC -->
## Outline
- IoTDB-Grafana
- Grafana installation
- Install Grafana
- Install data source plugin
- Start Grafana
- IoTDB installation
- IoTDB-Grafana installation
- Start IoTDB-Grafana
- Explore in Grafana
- Add data source
- Design in dashboard
<!-- /TOC -->
# IoTDB-Grafana

This project provides a connector which reads data from IoTDB and sends it to Grafana (https://grafana.com/). Before you use this tool, make sure Grafana and IoTDB are correctly installed and started.
## Grafana installation

### Install Grafana
* Download url: https://grafana.com/grafana/download
* version >= 4.4.1
### Install data source plugin
* plugin name: simple-json-datasource
* Download url: https://github.com/grafana/simple-json-datasource

After downloading this plugin, you can use the grafana-cli tool to install SimpleJson from the commandline:

```
grafana-cli plugins install grafana-simple-json-datasource
```
Alternatively, you can manually download the .zip file and unpack it into your grafana plugins directory.
* `{grafana-install-directory}\data\plugin\` (Windows)
* `/var/lib/grafana/plugins` (Linux)
* `/usr/local/var/lib/grafana/plugins` (Mac)
### Start Grafana
If you use Unix, Grafana starts automatically after installation; alternatively, you can run the `sudo service grafana-server start` command. See more information [here](http://docs.grafana.org/installation/debian/).

If you use Mac and installed Grafana with `homebrew`, you can also use `homebrew` to start it. First make sure homebrew/services is installed by running `brew tap homebrew/services`, then start Grafana using `brew services start grafana`.
See more information [here](http://docs.grafana.org/installation/mac/).
If you use Windows, start Grafana by executing grafana-server.exe, located in the bin directory, preferably from the command line. See more information [here](http://docs.grafana.org/installation/windows/).
## IoTDB installation
...@@ -56,9 +84,10 @@ mvn clean package -pl grafana -am -Dmaven.test.skip=true
cd grafana
```
Copy `application.properties` from `conf/` directory to `target` directory. (Or just make sure that `application.properties` and `iotdb-grafana-{version}.war` are in the same directory.)

Edit `application.properties`
```
# ip and port of IoTDB
spring.datasource.url = jdbc:iotdb://127.0.0.1:6667/
...@@ -94,18 +123,17 @@ $ java -jar iotdb-grafana-{version}.war
The default port of Grafana is 3000, see http://localhost:3000
Username and password are both "admin" by default.
### Add data source

Select `Data Sources` and then `Add data source`; select `SimpleJson` as the `Type` and set `URL` to http://localhost:8888

<img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/13203019/51664777-2766ae00-1ff5-11e9-9d2f-7489f8ccbfc2.png">

<img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/13203019/51664842-554bf280-1ff5-11e9-97d2-54eebe0b2ca1.png">

### Design in dashboard

Add diagrams in the dashboard and customize your query. See http://docs.grafana.org/guides/getting_started/

<img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/13203019/51664878-6e54a380-1ff5-11e9-9718-4d0e24627fa8.png">
\ No newline at end of file
...@@ -18,37 +18,68 @@
under the License.
-->
<!-- TOC -->
## Outline
- IoTDB-Grafana
- Grafana installation and deployment
- Installation
- Install the simple-json-datasource plugin
- Start Grafana
- IoTDB installation
- Install the IoTDB-Grafana connector
- Start IoTDB-Grafana
- Use Grafana
- Add the IoTDB data source
- Work with Grafana

<!-- /TOC -->
# IoTDB-Grafana

Grafana is an open-source metrics monitoring and visualization tool that can be used to display time series data and analyze how applications run. Grafana supports mainstream time series databases such as Graphite and InfluxDB as data sources. In the IoTDB project, we developed IoTDB-Grafana, a connector that presents time series data stored in IoTDB through Grafana, giving you a visual way to explore the time series data in your IoTDB database.
## Grafana installation and deployment

### Installation
* Download url: https://grafana.com/grafana/download
* version >= 4.4.1

### Install the simple-json-datasource plugin
* plugin name: simple-json-datasource
* Download url: https://github.com/grafana/simple-json-datasource

To download it, go to Grafana's plugin directory: `{Grafana-install-directory}\data\plugin\` (on Windows, the `data\plugin` directory is created automatically after Grafana starts), `/var/lib/grafana/plugins` (on Linux, the plugins directory needs to be created manually), or `/usr/local/var/lib/grafana/plugins` (on macOS; the exact location is shown in the command-line hints printed after installing Grafana with `brew install`).

Then run the following command:

```
Shell > git clone https://github.com/grafana/simple-json-datasource.git
```

Then restart the Grafana server and log in to Grafana in your browser. If "SimpleJson" appears among the "Type" options on the "Add data source" page, the plugin has been installed successfully.
### Start Grafana

Go to the Grafana installation directory and start Grafana with the following commands:

* Windows:
```
Shell > bin\grafana-server.exe
```
* Linux:
```
Shell > sudo service grafana-server start
```
* macOS:
```
Shell > grafana-server --config=/usr/local/etc/grafana/grafana.ini --homepath /usr/local/share/grafana cfg:default.paths.logs=/usr/local/var/log/grafana cfg:default.paths.data=/usr/local/var/lib/grafana cfg:default.paths.plugins=/usr/local/var/lib/grafana/plugins
```
## IoTDB installation

See [https://github.com/apache/incubator-iotdb](https://github.com/apache/incubator-iotdb)

## Install the IoTDB-Grafana connector
```shell
git clone https://github.com/apache/incubator-iotdb.git
...@@ -56,8 +87,10 @@ mvn clean package -pl grafana -am -Dmaven.test.skip=true
cd grafana
```
After the build succeeds, copy the `application.properties` file from the `conf/` directory to the `target/` directory and edit the property values in it:

```
# ip and port of IoTDB
spring.datasource.url = jdbc:iotdb://127.0.0.1:6667/
spring.datasource.username = root
spring.datasource.password = root
...@@ -65,19 +98,11 @@ spring.datasource.driver-class-name=org.apache.iotdb.jdbc.IoTDBDriver
server.port = 8888
```
### Start IoTDB-Grafana

```shell
cd grafana/target/
java -jar iotdb-grafana-{version}.war
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
...@@ -88,24 +113,23 @@ $ java -jar iotdb-grafana-{version}.war
...
```
## Use Grafana

Grafana displays data as web-based dashboards. To use it, open a browser and visit http://\<ip\>:\<port\>

Note: IP is the IP of the server where Grafana runs, and Port is Grafana's running port (3000 by default). The default login username and password are both "admin".

### Add the IoTDB data source

Click the "Grafana" icon in the top-left corner, select the `Data Source` option, and then click `Add data source`.

<img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/13203019/51664777-2766ae00-1ff5-11e9-9d2f-7489f8ccbfc2.png">

When editing the data source, select `Simplejson` in the `Type` field and fill in http://\<ip\>:\<port\> in the `URL` field, where IP is the IP of the server on which the IoTDB-Grafana connector runs and Port is its running port (8888 by default). Then make sure IoTDB has been started, and click "Save & Test"; the message "Data Source is working" indicates that the configuration succeeded.

<img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/13203019/51664842-554bf280-1ff5-11e9-97d2-54eebe0b2ca1.png">

### Work with Grafana

After entering the Grafana visualization page, you can add time series, as shown in the figure below. You can also follow the official Grafana documentation for these operations; see http://docs.grafana.org/guides/getting_started/ for details.

<img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;" src="https://user-images.githubusercontent.com/13203019/51664878-6e54a380-1ff5-11e9-9718-4d0e24627fa8.png">
\ No newline at end of file
...@@ -18,5 +18,195 @@
under the License.
-->
# TsFile-Hadoop-Connector
<!-- TOC -->
## Outline
- TsFile-Hadoop-Connector User Guide
- About TsFile-Hadoop-Connector
- System Requirements
- Data Type Correspondence
- TSFInputFormat Explanation
- Examples
- Read Example: calculate the sum
- Write Example: write the average into Tsfile
<!-- /TOC -->
# TsFile-Hadoop-Connector User Guide
## About TsFile-Hadoop-Connector
TsFile-Hadoop-Connector implements Hadoop support for external data sources of the TsFile type. This enables users to read, write and query TsFile with Hadoop.

With this connector, you can

* load a single TsFile, from either the local file system or HDFS, into Hadoop
* load all files in a specific directory, from either the local file system or HDFS, into Hadoop
* write data from Hadoop into TsFile
## System Requirements
|Hadoop Version | Java Version | TsFile Version|
|------------- | ------------ |------------ |
| `2.7.3` | `1.8` | `0.8.0-SNAPSHOT`|
> Note: For more information about how to download and use TsFile, please see the following link: https://github.com/apache/incubator-iotdb/tree/master/tsfile.
## Data Type Correspondence
| TsFile data type | Hadoop writable |
| ---------------- | --------------- |
| BOOLEAN | BooleanWritable |
| INT32 | IntWritable |
| INT64 | LongWritable |
| FLOAT | FloatWritable |
| DOUBLE | DoubleWritable |
| TEXT | Text |
## TSFInputFormat Explanation
TSFInputFormat extracts data from TsFile and formats it into records of `MapWritable`.
Suppose we want to extract data of the device named `d1`, which has three sensors named `s1`, `s2`, and `s3`.
`s1`'s type is `BOOLEAN`, `s2`'s type is `DOUBLE`, `s3`'s type is `TEXT`.
The `MapWritable` records will look like:
```
{
"time_stamp": 10000000,
"device_id": d1,
"s1": true,
"s2": 3.14,
"s3": "middle"
}
```
In the Hadoop map job, you can get any value you want by its key, as follows:
`mapwritable.get(new Text("s1"))`
> Note: All the keys in `MapWritable` have type of `Text`.
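To see how the type correspondence above plays out in practice, here is a minimal, hypothetical sketch of pulling the example record apart again. The `RecordPrinter` class is not part of the connector; it only assumes the key names shown in the struct above and the writable types from the table (with the timestamp assumed to arrive as a `LongWritable`).

```
import org.apache.hadoop.io.BooleanWritable;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Text;

public class RecordPrinter {

  /** Prints one MapWritable record shaped like the d1 example above (all keys are Text). */
  public static void print(MapWritable record) {
    long time = ((LongWritable) record.get(new Text("time_stamp"))).get();
    String device = record.get(new Text("device_id")).toString();
    // sensor values follow the TsFile-to-Hadoop writable mapping from the table above
    boolean s1 = ((BooleanWritable) record.get(new Text("s1"))).get();
    double s2 = ((DoubleWritable) record.get(new Text("s2"))).get();
    String s3 = record.get(new Text("s3")).toString();
    System.out.println(time + " " + device + " " + s1 + " " + s2 + " " + s3);
  }
}
```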
## Examples
### Read Example: calculate the sum
First of all, we should tell TSFInputFormat what kind of data we want from the TsFile.
```
// configure reading time enable
TSFInputFormat.setReadTime(job, true);
// configure reading deviceId enable
TSFInputFormat.setReadDeviceId(job, true);
// configure which deviceIds to read
String[] deviceIds = {"device_1"};
TSFInputFormat.setReadDeviceIds(job, deviceIds);
// configure reading which measurementIds
String[] measurementIds = {"sensor_1", "sensor_2", "sensor_3"};
TSFInputFormat.setReadMeasurementIds(job, measurementIds);
```
Then the output key and value types of the mapper and reducer should be specified:
```
// set inputformat and outputformat
job.setInputFormatClass(TSFInputFormat.class);
// set mapper output key and value
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(DoubleWritable.class);
// set reducer output key and value
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(DoubleWritable.class);
```
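The fragments above assume a standard Hadoop driver around them. A minimal sketch of such a driver is shown below; the class name, paths and job name are illustrative, the `TSMapper`/`TSReducer` classes are the ones defined in the next snippet, the `TSFInputFormat` package is taken from the example link at the end of this section, and we assume `TSFInputFormat` accepts input paths like any other `FileInputFormat` subclass.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.iotdb.hadoop.tsfile.TSFInputFormat;

public class TSFMRReadDriver {

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "TsFile read example");
    job.setJarByClass(TSFMRReadDriver.class);

    // TSFInputFormat options, as configured above
    TSFInputFormat.setReadTime(job, true);
    TSFInputFormat.setReadDeviceId(job, true);
    TSFInputFormat.setReadMeasurementIds(job, new String[]{"sensor_1", "sensor_2", "sensor_3"});

    // input format, mapper/reducer, and key/value classes, as configured above
    job.setInputFormatClass(TSFInputFormat.class);
    job.setMapperClass(TSMapper.class);
    job.setReducerClass(TSReducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(DoubleWritable.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);

    // illustrative paths; the TsFile may live on the local file system or HDFS
    TSFInputFormat.setInputPaths(job, new Path("hdfs://localhost:9000/input.tsfile"));
    FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:9000/output"));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```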
Then, the `mapper` and `reducer` classes define how you deal with the `MapWritable` records produced by the `TSFInputFormat` class.
```
public static class TSMapper extends Mapper<NullWritable, MapWritable, Text, DoubleWritable> {
@Override
protected void map(NullWritable key, MapWritable value,
Mapper<NullWritable, MapWritable, Text, DoubleWritable>.Context context)
throws IOException, InterruptedException {
Text deltaObjectId = (Text) value.get(new Text("device_id"));
context.write(deltaObjectId, (DoubleWritable) value.get(new Text("sensor_3")));
}
}
public static class TSReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
@Override
protected void reduce(Text key, Iterable<DoubleWritable> values,
Reducer<Text, DoubleWritable, Text, DoubleWritable>.Context context)
throws IOException, InterruptedException {
double sum = 0;
for (DoubleWritable value : values) {
sum = sum + value.get();
}
context.write(key, new DoubleWritable(sum));
}
}
```
> Note: For the complete code, please see the following link: https://github.com/apache/incubator-iotdb/blob/master/example/hadoop/src/main/java/org/apache/iotdb//hadoop/tsfile/TSFMRReadExample.java
### Write Example: write the average into Tsfile
Except for the `OutputFormatClass`, the rest of the configuration code for the Hadoop MapReduce job is almost the same as above.
```
job.setOutputFormatClass(TSFOutputFormat.class);
// set reducer output key and value
job.setOutputKeyClass(NullWritable.class);
job.setOutputValueClass(HDFSTSRecord.class);
```
Then, the `mapper` and `reducer` classes define how you deal with the `MapWritable` records produced by the `TSFInputFormat` class.
```
public static class TSMapper extends Mapper<NullWritable, MapWritable, Text, MapWritable> {
@Override
protected void map(NullWritable key, MapWritable value,
Mapper<NullWritable, MapWritable, Text, MapWritable>.Context context)
throws IOException, InterruptedException {
Text deltaObjectId = (Text) value.get(new Text("device_id"));
long timestamp = ((LongWritable)value.get(new Text("timestamp"))).get();
if (timestamp % 100000 == 0) {
context.write(deltaObjectId, new MapWritable(value));
}
}
}
/**
 * This reducer calculates the average value.
*/
public static class TSReducer extends Reducer<Text, MapWritable, NullWritable, HDFSTSRecord> {
@Override
protected void reduce(Text key, Iterable<MapWritable> values,
Reducer<Text, MapWritable, NullWritable, HDFSTSRecord>.Context context) throws IOException, InterruptedException {
long sensor1_value_sum = 0;
long sensor2_value_sum = 0;
double sensor3_value_sum = 0;
long num = 0;
for (MapWritable value : values) {
num++;
sensor1_value_sum += ((LongWritable)value.get(new Text("sensor_1"))).get();
sensor2_value_sum += ((LongWritable)value.get(new Text("sensor_2"))).get();
sensor3_value_sum += ((DoubleWritable)value.get(new Text("sensor_3"))).get();
}
HDFSTSRecord tsRecord = new HDFSTSRecord(1L, key.toString());
DataPoint dPoint1 = new LongDataPoint("sensor_1", sensor1_value_sum / num);
DataPoint dPoint2 = new LongDataPoint("sensor_2", sensor2_value_sum / num);
DataPoint dPoint3 = new DoubleDataPoint("sensor_3", sensor3_value_sum / num);
tsRecord.addTuple(dPoint1);
tsRecord.addTuple(dPoint2);
tsRecord.addTuple(dPoint3);
context.write(NullWritable.get(), tsRecord);
}
}
```
> Note: For the complete code, please see the following link: https://github.com/apache/incubator-iotdb/blob/master/example/hadoop/src/main/java/org/apache/iotdb//hadoop/tsfile/TSMRWriteExample.java
\ No newline at end of file
...@@ -19,7 +19,7 @@
-->
## Usage
## Dependencies
...@@ -29,12 +29,16 @@
## How to package only jdbc project
In root directory:
```
mvn clean package -pl jdbc -am -Dmaven.test.skip=true
```
## How to install in local maven repository
In root directory:
```
mvn clean install -pl jdbc -am -Dmaven.test.skip=true
```
## Using IoTDB JDBC with Maven
...@@ -55,10 +59,13 @@ This chapter provides an example of how to open a database connection, execute a
Requires that you include the packages containing the JDBC classes needed for database programming.
**NOTE: For faster insertion, the insertBatch() in Session is recommended.**
```Java
import java.sql.*;
import org.apache.iotdb.jdbc.IoTDBSQLException;

public class JDBCExample {
  /**
   * Before executing a SQL statement with a Statement object, you need to create a Statement object using the createStatement() method of the Connection object.
   * After creating a Statement object, you can use its execute() method to execute a SQL statement
...@@ -87,63 +94,62 @@ import org.apache.iotdb.jdbc.IoTDBSQLException;
      //Create time series
      //Different data type has different encoding methods. Here use INT32 as an example
      try {
        statement.execute("CREATE TIMESERIES root.demo.s0 WITH DATATYPE=INT32,ENCODING=RLE;");
      } catch (IoTDBSQLException e) {
        System.out.println(e.getMessage());
      }
      //Show time series
      statement.execute("SHOW TIMESERIES root.demo");
      outputResult(statement.getResultSet());
      //Show devices
      statement.execute("SHOW DEVICES");
      outputResult(statement.getResultSet());
      //Count time series
      statement.execute("COUNT TIMESERIES root");
      outputResult(statement.getResultSet());
      //Count nodes at the given level
      statement.execute("COUNT NODES root LEVEL=3");
      outputResult(statement.getResultSet());
      //Count timeseries group by each node at the given level
      statement.execute("COUNT TIMESERIES root GROUP BY LEVEL=3");
      outputResult(statement.getResultSet());
      //Execute insert statements in batch
      statement.addBatch("insert into root.demo(timestamp,s0) values(1,1);");
      statement.addBatch("insert into root.demo(timestamp,s0) values(1,1);");
      statement.addBatch("insert into root.demo(timestamp,s0) values(2,15);");
      statement.addBatch("insert into root.demo(timestamp,s0) values(2,17);");
      statement.addBatch("insert into root.demo(timestamp,s0) values(4,12);");
      statement.executeBatch();
      statement.clearBatch();
      //Full query statement
      String sql = "select * from root.demo";
      ResultSet resultSet = statement.executeQuery(sql);
      System.out.println("sql: " + sql);
      outputResult(resultSet);
      //Exact query statement
      sql = "select s0 from root.demo where time = 4;";
      resultSet = statement.executeQuery(sql);
      System.out.println("sql: " + sql);
      outputResult(resultSet);
      //Time range query
      sql = "select s0 from root.demo where time >= 2 and time < 5;";
      resultSet = statement.executeQuery(sql);
      System.out.println("sql: " + sql);
      outputResult(resultSet);
      //Aggregate query
      sql = "select count(s0) from root.demo;";
      resultSet = statement.executeQuery(sql);
      System.out.println("sql: " + sql);
      outputResult(resultSet);
      //Delete time series
      statement.execute("delete timeseries root.demo.s0");
      //close connection
      statement.close();
...@@ -197,4 +203,48 @@ import org.apache.iotdb.jdbc.IoTDBSQLException;
      System.out.println("--------------------------\n");
    }
  }
}
```
## Status Code
**Status Code** is introduced in the latest version. For example, since IoTDB requires registering a time series before writing data to it, a possible solution without status codes is:
```
try {
writeData();
} catch (SQLException e) {
  // the most likely case is that the time series does not exist
if (e.getMessage().contains("exist")) {
//However, using the content of the error message is not so efficient
registerTimeSeries();
//write data once again
writeData();
}
}
``` ```
With Status Code, instead of writing code like `if (e.getErrorMessage().contains("exist"))`, we can simply use `e.getErrorCode() == TSStatusCode.TIMESERIES_NOT_EXIST_ERROR.getStatusCode()`.
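As a sketch, the snippet above could then be rewritten as follows. It reuses the same placeholder `writeData()` and `registerTimeSeries()` methods, and assumes the caught exception exposes the numeric code from the table below (which lists the constant as TIMESERIES_NOT_EXIST_ERROR, code 301); the exact enum name may differ between versions.

```
try {
  writeData();
} catch (SQLException e) {
  // 301: the time series does not exist yet
  if (e.getErrorCode() == TSStatusCode.TIMESERIES_NOT_EXIST_ERROR.getStatusCode()) {
    registerTimeSeries();
    //write data once again
    writeData();
  } else {
    throw e;
  }
}
```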
Here is a list of Status Code and related message:
|Status Code|Status Type|Meaning|
|:---|:---|:---|
|200|SUCCESS_STATUS||
|201|STILL_EXECUTING_STATUS||
|202|INVALID_HANDLE_STATUS||
|301|TIMESERIES_NOT_EXIST_ERROR|Timeseries does not exist|
|302|UNSUPPORTED_FETCH_METADATA_OPERATION_ERROR|Unsupported fetch metadata operation|
|303|FETCH_METADATA_ERROR|Failed to fetch metadata|
|304|CHECK_FILE_LEVEL_ERROR|Meet error while checking file level|
|400|EXECUTE_STATEMENT_ERROR|Execute statement error|
|401|SQL_PARSE_ERROR|Meet error while parsing SQL|
|402|GENERATE_TIME_ZONE_ERROR|Meet error while generating time zone|
|403|SET_TIME_ZONE_ERROR|Meet error while setting time zone|
|404|NOT_A_STORAGE_GROUP_ERROR|Operating object is not a storage group|
|405|READ_ONLY_SYSTEM_ERROR|Operating system is read only|
|500|INTERNAL_SERVER_ERROR|Internal server error|
|600|WRONG_LOGIN_PASSWORD_ERROR|Username or password is wrong|
|601|NOT_LOGIN_ERROR|Has not logged in|
|602|NO_PERMISSION_ERROR|No permissions for this operation|
|603|UNINITIALIZED_AUTH_ERROR|Uninitialized authorizer|
...@@ -18,6 +18,7 @@
under the License.
-->
# Spark IoTDB Connector
## version
The versions required for Spark and Java are as follows:
...@@ -47,9 +48,9 @@ mvn clean scala:compile compile install
```
spark-shell --jars spark-iotdb-connector-0.9.0-SNAPSHOT.jar,iotdb-jdbc-0.9.0-SNAPSHOT-jar-with-dependencies.jar
import org.apache.iotdb.spark.db._
val df = spark.read.format("org.apache.iotdb.spark.db").option("url","jdbc:iotdb://127.0.0.1:6667/").option("sql","select * from root").load
df.printSchema()
...@@ -60,9 +61,9 @@ df.show()
```
spark-shell --jars spark-iotdb-connector-0.9.0-SNAPSHOT.jar,iotdb-jdbc-0.9.0-SNAPSHOT-jar-with-dependencies.jar
import org.apache.iotdb.spark.db._
val df = spark.read.format("org.apache.iotdb.spark.db").option("url","jdbc:iotdb://127.0.0.1:6667/").option("sql","select * from root").
option("lowerBound", [lower bound of time that you want query(include)]).option("upperBound", [upper bound of time that you want query(include)]).
option("numPartition", [the partition number you want]).load
...@@ -127,15 +128,15 @@ You can also use narrow table form which as follows: (You can see part 4 about h
## from wide to narrow
```
import org.apache.iotdb.spark.db._
val wide_df = spark.read.format("org.apache.iotdb.spark.db").option("url", "jdbc:iotdb://127.0.0.1:6667/").option("sql", "select * from root where time < 1100 and time > 1000").load
val narrow_df = Transformer.toNarrowForm(spark, wide_df)
```
## from narrow to wide
```
import org.apache.iotdb.spark.db._
val wide_df = Transformer.toWideForm(spark, narrow_df)
```
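Returning to the partitioned read shown earlier, here is a sketch of how the `lowerBound`/`upperBound`/`numPartition` placeholders might be filled in, written against the Java API that the next section introduces. The class name and the bound/partition values are illustrative assumptions, not values taken from this document.

```
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PartitionedReadExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("Partitioned IoTDB read")
        .master("local[*]")
        .getOrCreate();

    // illustrative values: query time range [1, 10000] split into 10 partitions
    Dataset<Row> df = spark.read().format("org.apache.iotdb.spark.db")
        .option("url", "jdbc:iotdb://127.0.0.1:6667/")
        .option("sql", "select * from root")
        .option("lowerBound", "1")
        .option("upperBound", "10000")
        .option("numPartition", "10")
        .load();

    df.printSchema();
    df.show();
  }
}
```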
...@@ -145,7 +146,7 @@ val wide_df = Transformer.toWideForm(spark, narrow_df)
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.iotdb.spark.db.*;

public class Example {
...@@ -156,7 +157,7 @@ public class Example {
        .master("local[*]")
        .getOrCreate();
    Dataset<Row> df = spark.read().format("org.apache.iotdb.spark.db")
        .option("url","jdbc:iotdb://127.0.0.1:6667/")
        .option("sql","select * from root").load();
...
...@@ -18,35 +18,8 @@
under the License.
-->
# TsFile-Spark-Connector User Guide
## 1. About TsFile-Spark-Connector

TsFile-Spark-Connector implements Spark support for external data sources of the TsFile type. This enables users to read, write and query TsFile with Spark.
...@@ -56,7 +29,6 @@ With this connector, you can
* load all files in a specific directory, from either the local file system or hdfs, into Spark
* write data from Spark into TsFile
## 2. System Requirements
|Spark Version | Scala Version | Java Version | TsFile |
...@@ -65,11 +37,7 @@ With this connector, you can
> Note: For more information about how to download and use TsFile, please see the following link: https://github.com/apache/incubator-iotdb/tree/master/tsfile.
> Note: OpenJDK may not be compatible with Scala. Use Oracle JDK instead.
## 3. Quick Start
### Local Mode
Start Spark with TsFile-Spark-Connector in local mode:
...@@ -85,7 +53,6 @@ Note:
* See https://github.com/apache/incubator-iotdb/tree/master/tsfile for how to get TsFile.
### Distributed Mode
Start Spark with TsFile-Spark-Connector in distributed mode (That is, the spark cluster is connected by spark-shell):
...@@ -100,7 +67,6 @@ Note:
* Multiple jar packages are separated by commas without any spaces.
* See https://github.com/apache/incubator-iotdb/tree/master/tsfile for how to get TsFile.
## 4. Data Type Correspondence
| TsFile data type | SparkSQL data type|
...@@ -112,7 +78,6 @@ Note:
| DOUBLE | DoubleType |
| TEXT | StringType |
## 5. Schema Inference
The way to display TsFile is dependent on the schema. Take the following TsFile structure as an example: There are three Measurements in the TsFile schema: status, temperature, and hardware. The basic information of these three measurements is as follows:
...@@ -152,31 +117,47 @@ The corresponding SparkSQL table is as follows:
| 5 | null | null | null | null | false | null |
| 6 | null | null | ccc | null | null | null |
You can also use the narrow table form, as follows (see part 6 about how to use the narrow form):
| time | device_name | status | hardware | temperature |
|------|-------------------------------|--------------------------|----------------------------|-------------------------------|
| 1 | root.ln.wf02.wt01 | true | null | 2.2 |
| 1 | root.ln.wf02.wt02 | true | null | null |
| 2 | root.ln.wf02.wt01 | null | null | 2.2 |
| 2 | root.ln.wf02.wt02 | false | aaa | null |
| 3 | root.ln.wf02.wt01 | true | null | 2.1 |
| 4 | root.ln.wf02.wt02 | true | bbb | null |
| 5 | root.ln.wf02.wt01 | false | null | null |
| 6 | root.ln.wf02.wt02 | null | ccc | null |
## 6. Scala API

NOTE: Remember to assign necessary read and write permissions in advance.

### Example 1: read from the local file system
```scala
import org.apache.iotdb.tsfile._
val wide_df = spark.read.tsfile("test.tsfile")
wide_df.show
val narrow_df = spark.read.tsfile("test.tsfile", true)
narrow_df.show
```

### Example 2: read from the hadoop file system
```scala
import org.apache.iotdb.tsfile._
val wide_df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile")
wide_df.show
val narrow_df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile", true)
narrow_df.show
```

### Example 3: read from a specific directory
```scala
...@@ -189,8 +170,7 @@ Note 1: Global time ordering of all TsFiles in a directory is not supported now.
Note 2: Measurements of the same name should have the same schema.
### Example 4: query in wide form
```scala
import org.apache.iotdb.tsfile._
...@@ -208,11 +188,28 @@ val newDf = spark.sql("select count(*) from tsfile_table")
newDf.show
```
### Example 5: query in narrow form
```scala
import org.apache.iotdb.tsfile._
val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile", true)
df.createOrReplaceTempView("tsfile_table")
val newDf = spark.sql("select * from tsfile_table where device_name = 'root.ln.wf02.wt02' and temperature > 5")
newDf.show
```
```scala
import org.apache.iotdb.tsfile._
val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile", true)
df.createOrReplaceTempView("tsfile_table")
val newDf = spark.sql("select count(*) from tsfile_table")
newDf.show
```
### Example 6: write in wide form
```scala
// we only support wide_form table to write
import org.apache.iotdb.tsfile._
val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile")
df.show
...@@ -222,8 +219,21 @@ val newDf = spark.read.tsfile("hdfs://localhost:9000/output")
newDf.show
```
### Example 7: write in narrow form
```scala
import org.apache.iotdb.tsfile._
val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile", true)
df.show
df.write.tsfile("hdfs://localhost:9000/output", true)
val newDf = spark.read.tsfile("hdfs://localhost:9000/output", true)
newDf.show
```
## Appendix A: Old Design of Schema Inference

The way to display TsFile is related to TsFile Schema. Take the following TsFile structure as an example: There are three Measurements in the Schema of TsFile: status, temperature, and hardware. The basic info of these three Measurements is as follows:
...@@ -255,7 +265,6 @@ The existing data in the file is as follows:
There are two ways to show it out:
#### the default way
Two columns will be created to store the full path of the device: time(LongType) and delta_object(StringType).
...@@ -291,7 +300,6 @@ Next, a column is created for each Measurement to store the specific data. The S
</center>
#### unfolding delta_object column
Expand the device column by "." into multiple columns, ignoring the root directory "root". Convenient for richer aggregation operations. If the user wants to use this display way, the parameter "delta\_object\_name" needs to be set in the table creation statement (refer to Example 5 in Section 5.1 of this manual), as in this example, parameter "delta\_object\_name" is set to "root.device.turbine". The number of path layers needs to be one-to-one. At this point, one column is created for each layer of the device path except the "root" layer. The column name is the name in the parameter and the value is the name of the corresponding layer of the device. Next, one column will be created for each Measurement to store the specific data.
...@@ -328,6 +336,5 @@ TsFile-Spark-Connector can display one or more TsFiles as a table in SparkSQL By
The writing process is to write a DataFrame as one or more TsFiles. By default, two columns need to be included: time and delta_object. The rest of the columns are used as Measurement. If the user wants to write the second table structure back to TsFile, the user can set the "delta\_object\_name" parameter (refer to Section 5.1 of this manual).
## Appendix B: Old Note

NOTE: Check the jar packages in the root directory of your Spark and replace libthrift-0.9.2.jar and libfb303-0.9.2.jar with libthrift-0.9.1.jar and libfb303-0.9.1.jar respectively.