diff --git a/README.md b/README.md index 8e911e875efcecaffd313a3d2f2871cc43eac0ee..e21e3276f5c36242687edb4805db85bbd632c71f 100644 --- a/README.md +++ b/README.md @@ -125,7 +125,7 @@ configuration files are under "conf" folder * system config module (`tsfile-format.properties`, `iotdb-engine.properties`) * log config module (`logback.xml`). -For more, see [Chapter4: Deployment and Management](https://iotdb.apache.org/#/Documents/0.8.0/chap4/sec1) in detail. +For more, see [Chapter3: Server](https://iotdb.apache.org/#/Documents/progress/chap3/sec1) and [Chapter4: Client](https://iotdb.apache.org/#/Documents/progress/chap4/sec1) for details. ## Start diff --git a/docs/Documentation-CHN/UserGuide/0-QuickStart/1-QuickStart.md b/docs/Documentation-CHN/UserGuide/0-Get Started/1-QuickStart.md similarity index 85% rename from docs/Documentation-CHN/UserGuide/0-QuickStart/1-QuickStart.md rename to docs/Documentation-CHN/UserGuide/0-Get Started/1-QuickStart.md index f3e4314832faa26f1fc9b39a623e7919fb1a0126..af2dd6ad69af02420414ad2006e07addc5639cf8 100755 --- a/docs/Documentation-CHN/UserGuide/0-QuickStart/1-QuickStart.md +++ b/docs/Documentation-CHN/UserGuide/0-Get Started/1-QuickStart.md @@ -41,14 +41,12 @@ # 快速入门 -本文将介绍关于IoTDB使用的基本流程,如果需要更多信息,请浏览我们官网的[指引](https://iotdb.apache.org/#/Documents/0.8.0/chap1/sec1). +本文将介绍关于IoTDB使用的基本流程,如果需要更多信息,请浏览我们官网的[指引](https://iotdb.apache.org/#/Documents/progress/chap1/sec1).
## 安装环境 安装前需要保证设备上配有JDK>=1.8的运行环境,并配置好JAVA_HOME环境变量。 -如需从源码进行编译,还需要Maven>=3.1的运行环境。 - 设置最大文件打开数为65535。 ## IoTDB安装 @@ -61,23 +59,9 @@ IoTDB支持多种安装途径。用户可以使用三种方式对IoTDB进行安 * 使用Docker镜像: dockerfile文件位于 https://github.com/apache/incubator-iotdb/blob/master/docker/Dockerfile -### 从源代码生成 - -您可以从代码仓库下载源码: - -``` -git clone https://github.com/apache/incubator-iotdb.git -``` +### IoTDB下载 -接下来在incubator-iotdb的根目录下执行以下命令: - -``` -> mvn clean package -DskipTests -``` - -执行完成后对应的二进制文件(包括服务器和客户端)都可以在**distribution/target/apache-iotdb-{project.version}-incubating-bin.zip**位置找到。 - -> 注意需要在源代码根目录添加目录"service-rpc/target/generated-sources/thrift"与"server/target/generated-sources/antlr3",以避免在IDE中的编译错误。 +您可以从这里下载程序:[下载](https://iotdb.apache.org/#/Download) ### 配置文件 @@ -87,7 +71,7 @@ git clone https://github.com/apache/incubator-iotdb.git * 系统配置模块 (`tsfile-format.properties`, `iotdb-engine.properties`) * 日志配置模块 (`logback.xml`). -想要了解更多,请浏览[Chapter3: Deployment](https://iotdb.apache.org/#/Documents/progress/chap3/sec1) +想要了解更多,请浏览[Chapter3: Server](https://iotdb.apache.org/#/Documents/progress/chap3/sec1) ## IoTDB试用 @@ -280,7 +264,7 @@ IoTDB> quit IoTDB> exit ``` -想要浏览更多IoTDB数据库支持的命令,请浏览[IoTDB SQL Documentation](https://iotdb.apache.org/#/Documents/progress/chap4/sec7). +想要浏览更多IoTDB数据库支持的命令,请浏览[SQL Reference](https://iotdb.apache.org/#/Documents/progress/chap5/sec4). 
### 停止IoTDB @@ -297,23 +281,3 @@ Windows系统停止命令如下: ``` > $sbin\stop-server.bat ``` - -##单独打包服务器 - -在incubator-iotdb的根目录下执行 - -``` -> mvn clean package -pl server -am -DskipTests -``` -在生成完毕之后,IoTDB服务器位于文件夹"server/target/iotdb-server-{project.version}"中。 - - -##单独打包客户端 - -在incubator-iotdb的根目录下执行 - -``` -> mvn clean package -pl client -am -DskipTests -``` - -在生成完毕之后,IoTDB的cli工具位于文件夹"client/target/iotdb-client-{project.version}"中。 \ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/0-QuickStart/2-Frequently asked questions.md b/docs/Documentation-CHN/UserGuide/0-Get Started/2-Frequently asked questions.md similarity index 100% rename from docs/Documentation-CHN/UserGuide/0-QuickStart/2-Frequently asked questions.md rename to docs/Documentation-CHN/UserGuide/0-Get Started/2-Frequently asked questions.md diff --git a/docs/Documentation-CHN/UserGuide/0-QuickStart/3-Reference.md b/docs/Documentation-CHN/UserGuide/0-Get Started/3-Publication.md similarity index 95% rename from docs/Documentation-CHN/UserGuide/0-QuickStart/3-Reference.md rename to docs/Documentation-CHN/UserGuide/0-Get Started/3-Publication.md index c0c1ca5298e706230de521e2980b0c9841e61d2a..90cab6331ab93f267278a4a2091eaf3e74feda7f 100644 --- a/docs/Documentation-CHN/UserGuide/0-QuickStart/3-Reference.md +++ b/docs/Documentation-CHN/UserGuide/0-Get Started/3-Publication.md @@ -28,4 +28,4 @@ Apache IoTDB 始于清华大学软件学院。IoTDB是一个用于管理大量 * [PISA: An Index for Aggregating Big Time Series Data](https://dl.acm.org/citation.cfm?id=2983775&dl=ACM&coll=DL), Xiangdong Huang and Jianmin Wang and Raymond K. Wong and Jinrui Zhang and Chen Wang. CIKM 2016. * [Matching Consecutive Subpatterns over Streaming Time Series](https://link.springer.com/chapter/10.1007/978-3-319-96893-3_8), Rong Kang and Chen Wang and Peng Wang and Yuting Ding and Jianmin Wang. APWeb/WAIM 2018. 
* [KV-match: A Subsequence Matching Approach Supporting Normalization and Time Warping](https://www.semanticscholar.org/paper/KV-match%3A-A-Subsequence-Matching-Approach-and-Time-Wu-Wang/9ed84cb15b7e5052028fc5b4d667248713ac8592), Jiaye Wu and Peng Wang and Chen Wang and Wei Wang and Jianmin Wang. ICDE 2019. -* 我们还研发了面向时间序列数据库的Benchmark工具: https://github.com/thulab/iotdb-benchmark +* 我们还研发了面向时间序列数据库的Benchmark工具: [https://github.com/thulab/iotdb-benchmark](https://github.com/thulab/iotdb-benchmark) diff --git a/docs/Documentation-CHN/UserGuide/1-Overview/2-Architecture.md b/docs/Documentation-CHN/UserGuide/1-Overview/2-Architecture.md index a4594be20a6c0fa21fea209c25eb6727033e708b..169370b6380ba9d2378ae4b4c7a156e544291f50 100644 --- a/docs/Documentation-CHN/UserGuide/1-Overview/2-Architecture.md +++ b/docs/Documentation-CHN/UserGuide/1-Overview/2-Architecture.md @@ -27,7 +27,7 @@ IoTDB套件由若干个组件构成,共同形成“数据收集-数据写入- 图1.1展示了使用IoTDB套件全部组件后形成的整体应用架构。下文称所有组件形成IoTDB套件,而IoTDB特指其中的时间序列数据库组件。 - + 在图1.1中,用户可以通过JDBC将来自设备上传感器采集的时序数据、服务器负载和CPU内存等系统状态数据、消息队列中的时序数据、应用程序的时序数据或者其他数据库中的时序数据导入到本地或者远程的IoTDB中。用户还可以将上述数据直接写成本地(或位于HDFS上)的TsFile文件。 diff --git a/docs/Documentation-CHN/UserGuide/1-Overview/4-Features.md b/docs/Documentation-CHN/UserGuide/1-Overview/4-Features.md index 6ddfeb10ca2218f010c621cffd8bef14fc622271..63bf4070f11e88c32d83f7a40ea926ea8f4f0f4b 100644 --- a/docs/Documentation-CHN/UserGuide/1-Overview/4-Features.md +++ b/docs/Documentation-CHN/UserGuide/1-Overview/4-Features.md @@ -40,12 +40,12 @@ IoTDB具有以下特点: * 以及同时具备上述特点的混合负载 * 面向时间序列的丰富查询语义 * 跨设备、跨传感器的时间序列时间对齐 - * 面向时序数据特征的计算(频域变换,0.7.0版本不支持) + * 面向时序数据特征的计算 * 提供面向时间维度的丰富聚合函数支持 * 极低的学习门槛 * 支持类SQL的数据操作 * 提供JDBC的编程接口 - * 完善的导入导出工具(0.7.0版本不支持) + * 完善的导入导出工具 * 完美对接开源生态环境 * 支持开源数据分析生态系统:Hadoop、Spark * 支持开源可视化工具对接:Grafana diff --git a/docs/Documentation-CHN/UserGuide/2-Concept/1-Key Concepts and Terminology.md b/docs/Documentation-CHN/UserGuide/2-Concept/1-Data Model and Terminology.md similarity index 83% rename from 
docs/Documentation-CHN/UserGuide/2-Concept/1-Key Concepts and Terminology.md rename to docs/Documentation-CHN/UserGuide/2-Concept/1-Data Model and Terminology.md index 7fb5bcdc7ebd9ce8cf9688223265916fdc495696..00012ef98060acca061d20be0f519df27793136f 100644 --- a/docs/Documentation-CHN/UserGuide/2-Concept/1-Key Concepts and Terminology.md +++ b/docs/Documentation-CHN/UserGuide/2-Concept/1-Data Model and Terminology.md @@ -20,9 +20,21 @@ --> # 第2章 IoTDB基本概念 -## 主要概念及术语 +## 数据模型与术语 -IoTDB中涉及如下基本概念: +我们为您提供一份简化的[样例数据](https://github.com/apache/incubator-iotdb/blob/master/docs/Documentation/OtherMaterial-Sample%20Data.txt)。 + +下载文件: [IoTDB-SampleData.txt](https://raw.githubusercontent.com/apache/incubator-iotdb/master/docs/Documentation/OtherMaterial-Sample%20Data.txt). + +根据本文描述的[数据](https://github.com/apache/incubator-iotdb/blob/master/docs/Documentation/OtherMaterial-Sample%20Data.txt)属性层级,按照属性涵盖范围以及它们之间的从属关系,我们可将其表示为如下图2.1的属性层级组织结构,其层级关系为:集团层-电场层-设备层-传感器层。其中ROOT为根节点,传感器层的每一个节点称为叶子节点。在使用IoTDB的过程中,您可以直接将由ROOT节点到每一个叶子节点路径上的属性用“.”连接,将其作为一个IoTDB的时间序列的名称。图2.1中最左侧的路径可以生成一个名为`ROOT.ln.wf01.wt01.status`的时间序列。 + +<center>
+ +**图2.1 属性层级组织结构**
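The naming rule described above (join every attribute on the path from ROOT down to a leaf with `.`) can be sketched in a few lines. This is a toy Python illustration of the rule, not IoTDB code:

```python
# Toy sketch of the naming rule from Figure 2.1: a time series name is the
# attribute path from the ROOT node down to one leaf sensor, joined with ".".
def series_name(levels):
    """levels: node names from the root to one leaf, e.g. group/field/device/sensor."""
    return ".".join(levels)

# The left-most path in Figure 2.1: group ln, field wf01, device wt01, sensor status.
leftmost = series_name(["ROOT", "ln", "wf01", "wt01", "status"])
```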
+ +得到时间序列的名称之后,我们需要根据数据的实际场景和规模设置存储组。由于在本文所述场景中,每次到达的数据通常以集团为单位(即数据可能为跨电场、跨设备的),为了写入数据时避免频繁切换IO降低系统速度,且满足用户以集团为单位进行物理隔离数据的要求,我们将存储组设置在集团层。 + +根据模型结构,IoTDB中涉及如下基本概念: * 设备 @@ -206,14 +218,3 @@ IoTDB在显示时间戳时可以支持LONG类型以及DATETIME-DISPLAY类型, ``` > 注意:'+'和'—'的左右两边必须有空格 -* 值 - -一个时间序列的值是由实际中的传感器向IoTDB发送的数值。这个值可以按照数据类型被IoTDB存储,同时用户也可以针对这个值的数据类型选择压缩方式,以及对应的编码方式。数据类型与对应编码的详细信息请参见本文[数据类型](/#/Documents/0.8.0/chap2/sec2)与[编码方式](/#/Documents/latest/chap2/sec3)节。 - -* 数据点 - -一个数据点是由一个时间戳-值对(timestamp, value)组成的。 - -* 数据的列 - -一个数据的列包含属于一个时间序列的所有值以及这些值相对应的时间戳。当有多个数据的列时,IoTDB会针对时间戳做合并,变为多个<时间戳-多值>对(timestamp, value, value, …)。 diff --git a/docs/Documentation-CHN/UserGuide/2-Concept/2-Data Type.md b/docs/Documentation-CHN/UserGuide/2-Concept/2-Data Type.md index ec8cba53badd696af7b63490b608dc4851a875c2..b5acca4a57b62bfc20aa0f5410eb6d8bffa38045 100644 --- a/docs/Documentation-CHN/UserGuide/2-Concept/2-Data Type.md +++ b/docs/Documentation-CHN/UserGuide/2-Concept/2-Data Type.md @@ -32,7 +32,7 @@ IoTDB支持: 一共六种数据类型。 -其中**FLOAT**与**DOUBLE**类型的序列,如果编码方式采用[RLE](/#/Documents/progress/chap2/sec3)或[TS_2DIFF](/#/Documents/progress/chap2/sec3)可以指定MAX_POINT_NUMBER,该项为浮点数的小数点后位数,具体指定方式请参见本文[第4.7节](/#/Documents/progress/chap4/sec7),若不指定则系统会根据配置文件`tsfile-format.properties`文件中的[float_precision项](/#/Documents/progress/chap3/sec2)配置。 +其中**FLOAT**与**DOUBLE**类型的序列,如果编码方式采用[RLE](/#/Documents/progress/chap2/sec3)或[TS_2DIFF](/#/Documents/progress/chap2/sec3)可以指定MAX_POINT_NUMBER,该项为浮点数的小数点后位数,具体指定方式请参见本文[第5.4节](/#/Documents/progress/chap5/sec4),若不指定则系统会根据配置文件`tsfile-format.properties`文件中的[float_precision项](/#/Documents/progress/chap3/sec4)配置。 当系统中用户输入的数据类型与该时间序列的数据类型不对应时,系统会提醒类型错误,如下所示,二阶差分不支持布尔类型的编码: diff --git a/docs/Documentation-CHN/UserGuide/2-Concept/3-Encoding.md b/docs/Documentation-CHN/UserGuide/2-Concept/3-Encoding.md index 28dca2fee4fbb49e193cb1ec5e983dcefbd16dee..230904a12382e760f962f1d06d6c74478a97663a 100644 --- a/docs/Documentation-CHN/UserGuide/2-Concept/3-Encoding.md +++ 
b/docs/Documentation-CHN/UserGuide/2-Concept/3-Encoding.md @@ -33,13 +33,13 @@ PLAIN编码,默认的编码方式,即不编码,支持多种数据类型, 二阶差分编码,比较适合编码单调递增或者递减的序列数据,不适合编码波动较大的数据。 -二阶差分编码也可用于对浮点数进行编码,但在创建时间序列的时候需指定保留小数位数(MAX_POINT_NUMBER,具体指定方式参见本文[第5.1节](/#/Documents/0.8.0/chap5/sec1))。比较适合存储某些浮点数值连续出现、单调调递增或者递减的序列数据,不适合存储对小数点后精度要求较高以及前后波动较大的序列数据。 +二阶差分编码也可用于对浮点数进行编码,但在创建时间序列的时候需指定保留小数位数(MAX_POINT_NUMBER,具体指定方式参见本文[第5.4节](/#/Documents/progress/chap5/sec4))。比较适合存储某些浮点数值连续出现、单调递增或者递减的序列数据,不适合存储对小数点后精度要求较高以及前后波动较大的序列数据。 * 游程编码(RLE) 游程编码,比较适合存储某些整数值连续出现的序列,不适合编码大部分情况下前后值不一样的序列数据。 -游程编码也可用于对浮点数进行编码,,但在创建时间序列的时候需指定保留小数位数(MAX_POINT_NUMBER,具体指定方式参见本文本文[第5.1节](/#/Documents/0.8.0/chap5/sec1))。比较适合存储某些浮点数值连续出现的序列数据,不适合存储对小数点后精度要求较高以及前后波动较大的序列数据。 +游程编码也可用于对浮点数进行编码,但在创建时间序列的时候需指定保留小数位数(MAX_POINT_NUMBER,具体指定方式参见本文[第5.4节](/#/Documents/progress/chap5/sec4))。比较适合存储某些浮点数值连续出现的序列数据,不适合存储对小数点后精度要求较高以及前后波动较大的序列数据。 * GORILLA编码(GORILLA) diff --git a/docs/Documentation-CHN/UserGuide/2-Concept/4-Compression.md b/docs/Documentation-CHN/UserGuide/2-Concept/4-Compression.md index 1c08500044b0e6f93331fe68a40701191cf087a0..e3467d90d7486d4e6209fab4a4c10e0f257ebbb6 100644 --- a/docs/Documentation-CHN/UserGuide/2-Concept/4-Compression.md +++ b/docs/Documentation-CHN/UserGuide/2-Concept/4-Compression.md @@ -25,4 +25,9 @@ 当时间序列写入并按照指定的类型编码为二进制数据后,IoTDB会使用压缩技术对该数据进行压缩,进一步提升空间存储效率。虽然编码和压缩都旨在提升存储效率,但编码技术通常只适合特定的数据类型(如二阶差分编码只适合与INT32或者INT64编码,存储浮点数需要先将他们乘以10^m以转换为整数),然后将它们转换为二进制流。压缩方式(SNAPPY)针对二进制流进行压缩,因此压缩方式的使用不再受数据类型的限制。 -IoTDB允许在创建一个时间序列的时候指定该列的压缩方式。现阶段IoTDB现在支持的压缩方式有两种:UNCOMPRESSOR(不压缩)和SNAPPY压缩。压缩方式的指定语法详见本文[4.7节](/#/Documents/progress/chap4/sec7)。 +IoTDB允许在创建一个时间序列的时候指定该列的压缩方式。现阶段IoTDB支持的压缩方式有两种: + +* UNCOMPRESSOR(不压缩) +* SNAPPY压缩 + +压缩方式的指定语法详见本文[5.4节](/#/Documents/progress/chap5/sec4)。 diff --git a/docs/Documentation-CHN/UserGuide/3-Deployment/1-Deployment.md b/docs/Documentation-CHN/UserGuide/3-Server/1-Download.md similarity index 93% rename from docs/Documentation-CHN/UserGuide/3-Deployment/1-Deployment.md rename
to docs/Documentation-CHN/UserGuide/3-Server/1-Download.md index a69437f5c35867427c5be4c6bbc30d68e21048d3..c4a1e219c406f4d516b63a2898428a0f3ca33dc6 100644 --- a/docs/Documentation-CHN/UserGuide/3-Deployment/1-Deployment.md +++ b/docs/Documentation-CHN/UserGuide/3-Server/1-Download.md @@ -19,9 +19,9 @@ --> -# 第3章 系统部署 +# 第3章 服务器端 -## 系统部署 +## 下载 IoTDB为您提供了两种安装方式,您可以参考下面的建议,任选其中一种: @@ -35,8 +35,9 @@ IoTDB为您提供了两种安装方式,您可以参考下面的建议,任选 如果您需要从源码进行编译,还需要安装: -1. Maven>=3.0的运行环境,具体安装方法可以参考以下链接:[https://maven.apache.org/install.html](https://maven.apache.org/install.html)。 +1. Maven>=3.1的运行环境,具体安装方法可以参考以下链接:[https://maven.apache.org/install.html](https://maven.apache.org/install.html)。 +> 注: 也可以选择不安装,使用我们提供的'mvnw.sh' 或 'mvnw.cmd' 工具。使用时请用'mvnw.sh' 或 'mvnw.cmd'命令代替下文的'mvn'命令。 ### 从官网下载二进制可执行文件 diff --git a/docs/Documentation-CHN/UserGuide/3-Deployment/4-TsFile library Installation.md b/docs/Documentation-CHN/UserGuide/3-Server/2-Single Node Setup.md similarity index 95% rename from docs/Documentation-CHN/UserGuide/3-Deployment/4-TsFile library Installation.md rename to docs/Documentation-CHN/UserGuide/3-Server/2-Single Node Setup.md index 100a72fb59f8220bcd15af6c878ce2af4e19b685..be153fce55dcf00a69178b9de2d3eb79cf49cb0b 100644 --- a/docs/Documentation-CHN/UserGuide/3-Deployment/4-TsFile library Installation.md +++ b/docs/Documentation-CHN/UserGuide/3-Server/2-Single Node Setup.md @@ -18,6 +18,6 @@ under the License. --> -# 第3章: 系统部署 +# 第3章: 服务器端 Coming Soon. 
\ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/6-API/3-Python API.md b/docs/Documentation-CHN/UserGuide/3-Server/3-Cluster Setup.md similarity index 95% rename from docs/Documentation-CHN/UserGuide/6-API/3-Python API.md rename to docs/Documentation-CHN/UserGuide/3-Server/3-Cluster Setup.md index 82d0f29588861d3c9391187b0dc0e1c062b7ea76..be153fce55dcf00a69178b9de2d3eb79cf49cb0b 100644 --- a/docs/Documentation-CHN/UserGuide/6-API/3-Python API.md +++ b/docs/Documentation-CHN/UserGuide/3-Server/3-Cluster Setup.md @@ -18,6 +18,6 @@ under the License. --> -# 第6章: API +# 第3章: 服务器端 Coming Soon. \ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/3-Deployment/2-Configuration.md b/docs/Documentation-CHN/UserGuide/3-Server/4-Config Manual.md similarity index 99% rename from docs/Documentation-CHN/UserGuide/3-Deployment/2-Configuration.md rename to docs/Documentation-CHN/UserGuide/3-Server/4-Config Manual.md index e0a5ce6522152a2f61f71ef22d9c029a028d65d7..eb918a760962c89528e6372c630576a5cf4e29e9 100644 --- a/docs/Documentation-CHN/UserGuide/3-Deployment/2-Configuration.md +++ b/docs/Documentation-CHN/UserGuide/3-Server/4-Config Manual.md @@ -19,7 +19,7 @@ --> -# 第3章 系统部署 +# 第3章 服务器端 ## 系统配置 diff --git a/docs/Documentation-CHN/UserGuide/7-System Design/1-Hierarchy.md b/docs/Documentation-CHN/UserGuide/3-Server/5-Docker Image.md similarity index 95% rename from docs/Documentation-CHN/UserGuide/7-System Design/1-Hierarchy.md rename to docs/Documentation-CHN/UserGuide/3-Server/5-Docker Image.md index 77409d34ddf6562e9ebc55299f4542645608a649..c5b8c36ab9e32ef57acd231d1845958a60c5d19d 100644 --- a/docs/Documentation-CHN/UserGuide/7-System Design/1-Hierarchy.md +++ b/docs/Documentation-CHN/UserGuide/3-Server/5-Docker Image.md @@ -19,6 +19,6 @@ --> -# 第8章: TsFile +# 第3章 服务器端 Coming Soon. 
\ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/4-Operation Manual/1-Cli Shell Tool.md b/docs/Documentation-CHN/UserGuide/4-Client/1-Command Line Interface(CLI).md similarity index 94% rename from docs/Documentation-CHN/UserGuide/4-Operation Manual/1-Cli Shell Tool.md rename to docs/Documentation-CHN/UserGuide/4-Client/1-Command Line Interface(CLI).md index 5dad70024b48ebac016d266d656414ae22e50f5c..e06bf6348775503b2cfd6c1426f8e1929be18b35 100644 --- a/docs/Documentation-CHN/UserGuide/4-Operation Manual/1-Cli Shell Tool.md +++ b/docs/Documentation-CHN/UserGuide/4-Client/1-Command Line Interface(CLI).md @@ -20,19 +20,29 @@ --> -# 第4章 IoTDB操作指南 +# 第4章 客户端 ## 概览 - Cli / Shell工具 + - Cli / Shell安装 - Cli / Shell运行方式 - Cli / Shell运行参数 - Cli / Shell的-e参数 -# Cli / Shell工具 +# Command Line Interface(CLI) IOTDB为用户提供Client/Shell工具用于启动客户端和服务端程序。下面介绍每个Client/Shell工具的运行方式和相关参数。 > \$IOTDB\_HOME表示IoTDB的安装目录所在路径。 +## Cli / Shell安装 +在incubator-iotdb的根目录下执行 + +``` +> mvn clean package -pl client -am -DskipTests +``` + +在生成完毕之后,IoTDB的cli工具位于文件夹"client/target/iotdb-client-{project.version}"中。 + ## Cli / Shell运行方式 安装后的IoTDB中有一个默认用户:`root`,默认密码为`root`。用户可以使用该用户尝试运行IoTDB客户端以测试服务器是否正常启动。客户端启动脚本为$IOTDB_HOME/bin文件夹下的`start-client`脚本。启动脚本时需要指定运行IP和PORT。以下为服务器在本机启动,且用户未更改运行端口号的示例,默认端口为6667。若用户尝试连接远程服务器或更改了服务器运行的端口号,请在-h和-p项处使用服务器的IP和PORT。 diff --git a/docs/Documentation-CHN/UserGuide/6-API/1-JDBC API.md b/docs/Documentation-CHN/UserGuide/4-Client/2-Programming - JDBC.md similarity index 98% rename from docs/Documentation-CHN/UserGuide/6-API/1-JDBC API.md rename to docs/Documentation-CHN/UserGuide/4-Client/2-Programming - JDBC.md index f58f91354faafe4caa062fce05594d8646da362c..a4d8d9e1cbd583be6d43f48339a4653c53c92104 100644 --- a/docs/Documentation-CHN/UserGuide/6-API/1-JDBC API.md +++ b/docs/Documentation-CHN/UserGuide/4-Client/2-Programming - JDBC.md @@ -19,9 +19,9 @@ --> -# 第6章: API +# 第4章: 客户端 -## JDBC API +## 编程 - JDBC Coming soon. 
diff --git a/docs/Documentation-CHN/UserGuide/6-API/2-Session API.md b/docs/Documentation-CHN/UserGuide/4-Client/3-Programming - Session.md similarity index 97% rename from docs/Documentation-CHN/UserGuide/6-API/2-Session API.md rename to docs/Documentation-CHN/UserGuide/4-Client/3-Programming - Session.md index 8f73384576b3ff6a981d1553ddb31193831f451d..919af6e84bf5fe19002ecc105676dcc17b059035 100644 --- a/docs/Documentation-CHN/UserGuide/6-API/2-Session API.md +++ b/docs/Documentation-CHN/UserGuide/4-Client/3-Programming - Session.md @@ -19,9 +19,9 @@ --> -# 第6章: API +# 第4章: 客户端 -# Session API +# 编程 - Session ## 依赖 diff --git a/docs/Documentation-CHN/UserGuide/4-Client/4-Programming - Other Language.md b/docs/Documentation-CHN/UserGuide/4-Client/4-Programming - Other Language.md new file mode 100644 index 0000000000000000000000000000000000000000..506d497fcfeeef7e99b5b93074f2be6386756e4e --- /dev/null +++ b/docs/Documentation-CHN/UserGuide/4-Client/4-Programming - Other Language.md @@ -0,0 +1,24 @@ + +# 第4章: 客户端 +## 其他语言 + +Coming Soon. \ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/4-Spark IoTDB Connector.md b/docs/Documentation-CHN/UserGuide/4-Client/5-Programming - TsFile API (TimeSeries File Format).md similarity index 94% rename from docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/4-Spark IoTDB Connector.md rename to docs/Documentation-CHN/UserGuide/4-Client/5-Programming - TsFile API (TimeSeries File Format).md index da945df06a807a44744d5df80db04adb724a8f0a..26b47baa04c74a71dd10fcfaabd5b2d78bb2d8c8 100644 --- a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/4-Spark IoTDB Connector.md +++ b/docs/Documentation-CHN/UserGuide/4-Client/5-Programming - TsFile API (TimeSeries File Format).md @@ -7,9 +7,9 @@ to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at - + http://www.apache.org/licenses/LICENSE-2.0 - + Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY @@ -18,6 +18,6 @@ under the License. --> -# 第10章: 生态集成 -Coming Soon. \ No newline at end of file +# 第4章: 客户端 +## TsFile API \ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/4-Operation Manual/3-Data Import.md b/docs/Documentation-CHN/UserGuide/4-Operation Manual/3-Data Import.md deleted file mode 100644 index cad355569e184aea3b66187f0bd0cef6204e3eaa..0000000000000000000000000000000000000000 --- a/docs/Documentation-CHN/UserGuide/4-Operation Manual/3-Data Import.md +++ /dev/null @@ -1,85 +0,0 @@ - - -# 第4章 IoTDB操作指南 - -## 数据接入 -### 历史数据导入 - -0.7.0版本中暂不支持此功能。 - -### 实时数据接入 - -IoTDB为用户提供多种插入实时数据的方式,例如在[Cli/Shell工具](/#/Documents/progress/chap4/sec1)中直接输入插入数据的[INSERT语句](/#/Documents/progress/chap4/sec7),或使用Java API(标准[Java JDBC](/#/Documents/progress/chap6/sec1)接口)单条或批量执行插入数据的[INSERT语句](/#/Documents/progress/chap4/sec7)。 - -本节主要为您介绍实时数据接入的[INSERT语句](/#/Documents/progress/chap4/sec7)在场景中的实际使用示例,有关INSERT SQL语句的详细语法请参见本文[INSERT语句](/#/Documents/progress/chap4/sec7)节。 - -#### 使用INSERT语句 -使用[INSERT语句](/#/Documents/progress/chap4/sec7)可以向指定的已经创建的一条或多条时间序列中插入数据。对于每一条数据,均由一个时间戳类型的[时间戳](/#/Documents/progress/chap2/sec1)和一个[数值或布尔值、字符串类型](/#/Documents/progress/chap2/sec2)的传感器采集值组成。 - -在本节的场景实例下,以其中的两个时间序列`root.ln.wf02.wt02.status`和`root.ln.wf02.wt02.hardware`为例 ,它们的数据类型分别为BOOLEAN和TEXT。 - -单列数据插入示例代码如下: -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp,status) values(1,true) -IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1, "v1") -``` - -以上示例代码将长整型的timestamp以及值为true的数据插入到时间序列`root.ln.wf02.wt02.status`中和将长整型的timestamp以及值为”v1”的数据插入到时间序列`root.ln.wf02.wt02.hardware`中。执行成功后会返回执行时间,代表数据插入已完成。 - -> 注意:在IoTDB中,TEXT类型的数据单双引号都可以来表示,上面的插入语句是用的是双引号表示TEXT类型数据,下面的示例将使用单引号表示TEXT类型数据。 - 
-INSERT语句还可以支持在同一个时间点下多列数据的插入,同时向2时间点插入上述两个时间序列的值,多列数据插入示例代码如下: - -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp, status, hardware) VALUES (2, false, 'v2') -``` - -插入数据后我们可以使用SELECT语句简单查询已插入的数据。 - -``` -IoTDB > select * from root.ln.wf02 where time < 3 -``` - -结果如图所示。由查询结果可以看出,单列、多列数据的插入操作正确执行。 -
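The single- and multi-column INSERT statements above all follow one fixed shape. The helper below is hypothetical (not part of IoTDB; the name `build_insert` is invented) and simply assembles such statements from a device path and column/value lists:

```python
# Hypothetical helper that assembles the INSERT statements shown above.
# Values are passed pre-formatted as SQL literals (e.g. "true", "'v2'").
def build_insert(device, columns, values):
    cols = ",".join(["timestamp"] + columns)
    vals = ",".join(str(v) for v in values)
    return f"insert into {device}({cols}) values({vals})"

single = build_insert("root.ln.wf02.wt02", ["status"], [1, "true"])
multi = build_insert("root.ln.wf02.wt02", ["status", "hardware"], [2, "false", "'v2'"])
```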
- -### INSERT语句的错误处理 - -若用户向一个不存在的时间序列中插入数据,例如执行以下命令: - -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp, temperature) values(1,"v1") -``` - -由于`root.ln.wf02.wt02. temperature`时间序列不存在,系统将会返回以下ERROR告知该Timeseries路径不存在: - -``` -Msg: Current deviceId[root.ln.wf02.wt02] does not contains measurement:temperature -``` -若用户插入的数据类型与该Timeseries对应的数据类型不一致,例如执行以下命令: -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1,100) -``` -系统将会返回以下ERROR告知数据类型有误: -``` -error: The TEXT data type should be covered by " or ' -``` diff --git a/docs/Documentation-CHN/UserGuide/4-Operation Manual/4-Data Query.md b/docs/Documentation-CHN/UserGuide/4-Operation Manual/4-Data Query.md deleted file mode 100644 index df55ac9660374f12a7e7c28f041060e2cb4316fb..0000000000000000000000000000000000000000 --- a/docs/Documentation-CHN/UserGuide/4-Operation Manual/4-Data Query.md +++ /dev/null @@ -1,503 +0,0 @@ - - -# 第4章 IoTDB操作指南 - -## 数据查询 -### 时间切片查询 - -本章节主要介绍时间切片查询的相关示例,主要使用的是[IoTDB SELECT语句](/#/Documents/progress/chap4/sec7)。同时,您也可以使用[Java JDBC](/#/Documents/progress/chap6/sec1)标准接口来执行相关的查询语句。 - -#### 根据一个时间区间选择一列数据 - -SQL语句为: - -``` -select temperature from root.ln.wf01.wt01 where time < 2017-11-01T00:08:00.000 -``` -其含义为: - -被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为温度传感器(temperature);该语句要求选择出该设备在“2017-11-01T00:08:00.000”(此处可以使用多种时间格式,详情可参看[2.1节](/#/Documents/progress/chap2/sec1))时间点以前的所有温度传感器的值。 - -该SQL语句的执行结果如下: - -
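A time-slice query of this kind keeps exactly the points whose timestamps satisfy the predicate. A toy Python model of the selection above, with made-up integer timestamps and values standing in for real data:

```python
# Toy model of a time-slice query: keep only points before the cut-off time.
points = [(1, 20.0), (2, 20.5), (8, 21.0), (9, 21.5)]  # (time, temperature), invented values

def time_slice(series, upper):
    """Return the (time, value) pairs with time < upper."""
    return [(t, v) for (t, v) in series if t < upper]

before_8 = time_slice(points, 8)
```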
- -#### 根据一个时间区间选择多列数据 - -SQL语句为: - -``` -select status, temperature from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000; -``` -其含义为: - -被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为供电状态(status)和温度传感器(temperature);该语句要求选择出“2017-11-01T00:05:00.000”至“2017-11-01T00:12:00.000”之间的所选时间序列的值。 - -该SQL语句的执行结果如下: - -
- -#### 按照多个时间区间选择同一设备的多列数据 - -IoTDB支持在一次查询中指定多个时间区间条件,用户可以根据需求随意组合时间区间条件。例如, - -SQL语句为: - -``` -select status,temperature from root.ln.wf01.wt01 where (time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000) or (time >= 2017-11-01T16:35:00.000 and time <= 2017-11-01T16:37:00.000); -``` -其含义为: - -被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为“供电状态(status)”和“温度传感器(temperature)”;该语句指定了两个不同的时间区间,分别为“2017-11-01T00:05:00.000至2017-11-01T00:12:00.000”和“2017-11-01T16:35:00.000至2017-11-01T16:37:00.000”;该语句要求选择出满足任一时间区间的被选时间序列的值。 - -该SQL语句的执行结果如下: -
- - -#### 按照多个时间区间选择不同设备的多列数据 - -该系统支持在一次查询中选择任意列的数据,也就是说,被选择的列可以来源于不同的设备。例如,SQL语句为: - -``` -select wf01.wt01.status,wf02.wt02.hardware from root.ln where (time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000) or (time >= 2017-11-01T16:35:00.000 and time <= 2017-11-01T16:37:00.000); -``` -其含义为: - -被选择的时间序列为“ln集团wf01子站wt01设备的供电状态”以及“ln集团wf02子站wt02设备的硬件版本”;该语句指定了两个时间区间,分别为“2017-11-01T00:05:00.000至2017-11-01T00:12:00.000”和“2017-11-01T16:35:00.000至2017-11-01T16:37:00.000”;该语句要求选择出满足任意时间区间的被选时间序列的值。 - -该SQL语句的执行结果如下: -
- -### 降频聚合查询 - -本章节主要介绍降频聚合查询的相关示例,主要使用的是IoTDB SELECT语句的[GROUP BY子句](/#/Documents/progress/chap4/sec7),该子句是IoTDB中用于根据用户给定划分条件对结果集进行划分,并对已划分的结果集进行聚合计算的语句。IoTDB支持根据时间间隔对结果集进行划分,默认结果按照时间升序排列。同时,您也可以使用[Java JDBC](/#/Documents/progress/chap6/sec1)标准接口来执行相关的查询语句。 - -GROUP BY语句为用户提供三类指定参数: - -* 参数1:划分时间轴的时间间隔参数 -* 参数2:时间轴划分原点参数(可选参数) -* 参数3:时间轴显示时间窗参数(一个或多个) - - -三类参数的实际含义如下图3.2所示。其中时间轴划分原点参数为可选参数,下面我们将给出指定划分原点、不指定划分原点、指定时间过滤条件三种较为典型的降频聚合的例子。 - -
- -**图3.2 三类参数实际含义**
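The three parameter kinds above can be modeled as a small bucketing scheme: the interval parameter cuts the time axis into consecutive buckets starting from the origin, and only points inside the display window are aggregated. A toy sketch, with plain integers standing in for timestamps:

```python
# Sketch of the time-axis partitioning behind GROUP BY.
def bucket_of(t, origin, interval):
    """Index of the bucket [origin + k*interval, origin + (k+1)*interval) containing t."""
    return (t - origin) // interval

def group_count(times, origin, interval, window):
    """Count points per bucket, keeping only points inside the display window."""
    lo, hi = window
    counts = {}
    for t in times:
        if lo <= t <= hi:
            k = bucket_of(t, origin, interval)
            counts[k] = counts.get(k, 0) + 1
    return counts

# With origin 0 and interval 10, points 3 and 7 land in bucket 0, point 12 in
# bucket 1, and 25 is outside the display window [0, 20] and is dropped.
demo = group_count([3, 7, 12, 25], 0, 10, (0, 20))
```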
- -#### 不指定时间轴划分原点的降频聚合 - -SQL语句为: - -``` -select count(status), max_value(temperature) from root.ln.wf01.wt01 group by (1d, [2017-11-01T00:00:00, 2017-11-07T23:00:00]); -``` -其含义为: - -由于用户没有指定时间轴划分原点参数,GROUP BY语句将默认以1970年1月1日0点(+0时区)为划分原点。 - -以上GROUP BY语句的第一个参数是划分时间轴的时间间隔,以该参数(1d)为时间间隔,默认原点为划分原点,将时间轴划分为若干个连续的区间,这些区间为[0,1d],[1d,2d],[2d,3d]…。 - -以上GROUP BY语句的第二个参数是显示时间段参数,该参数决定了最终展示的结果范围为[2017-11-01T00:00:00, 2017-11-07T23:00:00]。 - -之后系统会将和where子句中的时间及值过滤条件与GROUP BY语句的第二个参数一同作为数据的过滤条件,得到满足过滤条件要求的数据(在此例子中为[2017-11-01T00:00:00, 2017-11-07T23:00:00]范围内的数据),将这些数据映射到前面的已经切分好的时间轴中(在此例子中,从2017-11-01T00:00:00开始至2017-11-07T23:00:00结束,每隔1天的时间段中均有映射进来的数据)。 - -由于要显示的结果范围中每个时间段均有数据,该SQL语句的执行结果如图: - -
- -#### 指定时间轴划分原点的降频聚合 - -SQL语句为: - -``` -select count(status), max_value(temperature) from root.ln.wf01.wt01 group by (1d, 2017-11-03 00:00:00, [2017-11-01 00:00:00, 2017-11-07 23:00:00]); -``` - -其含义为: - -由于用户指定了时间轴划分原点参数(第二个参数)为2017-11-03 00:00:00,GROUP BY语句将默认以2017年11月03日0点(系统默认时区)为划分原点。 - -以上GROUP BY语句的第一个参数是划分时间轴的时间间隔,以该参数(1d)为时间间隔,默认原点为划分原点,将时间轴划分为若干个连续的区间,这些区间为[2017-11-02T00:00:00, 2017-11-03T00:00:00], [2017-11-03T00:00:00, 2017-11-04T00:00:00]等。 - -以上GROUP BY语句的第三个参数是显示时间段参数,该参数决定了最终展示的结果范围为[2017-11-01T00:00:00, 2017-11-07T23:00:00]。 - -之后系统会将和where子句中的时间及值过滤条件与GROUP BY语句的第三个参数一同作为数据的过滤条件,得到满足过滤条件要求的数据(在此例子中为[2017-11-01T00:00:00, 2017-11-07T23:00:00]范围内的数据),将这些数据映射到前面的已经切分好的时间轴中(在此例子中,从2017-11-01T00:00:00开始至2017-11-07T23:00:00结束,每隔1天的时间段中均有映射进来的数据)。 - -由于要显示的结果范围中每个时间段均有数据,该SQL语句的执行结果如图: - -
- -#### 指定时间过滤条件的降频聚合 - -SQL语句为: - -``` -select count(status), max_value(temperature) from root.ln.wf01.wt01 where time > 2017-11-03T06:00:00 and temperature > 20 group by(1h, [2017-11-03T00:00:00, 2017-11-03T23:00:00]); -``` -其含义为: - -由于用户没有指定时间轴划分原点参数,GROUP BY语句将默认以1970年1月1日0点(+0时区)为划分原点。 - -以上GROUP BY语句的第一个参数是划分时间轴的时间间隔,以该参数(1h)为时间间隔,默认原点为划分原点,将时间轴划分为若干个连续的区间,这些区间为[2017-11-03T00:00:00, 2017-11-03T01:00:00], [2017-11-03T01:00:00, 2017-11-03T02:00:00]等。 - -以上GROUP BY语句的第二个参数是显示时间段参数,该参数决定了最终展示的结果范围为[2017-11-03T00:00:00, 2017-11-03T23:00:00]。 - -之后系统会将和where子句中的时间及值过滤条件与GROUP BY语句的第二个参数一同作为数据的过滤条件,得到满足过滤条件要求的数据(在此例子中为[2017-11-03T00:06:00, 2017-11-07T23:00:00]范围内,且root.ln.wf01.wt01.temperature > 20的数据),将这些数据映射到前面的已经切分好的时间轴中(在此例子中,从2017-11-03T00:06:00开始至2017-11-03T23:00:00结束,每隔1小时的时间段中均有映射进来的数据)。 - -由于要显示的结果范围中[2017-11-03T00:00:00, 2017-11-03T00:06:00]时段没有数据,该段区间的聚合结果会显示null,其余时间段均有数据,该SQL语句的执行结果如图: - -
- -需要注意的是,GROUP BY语句中SELECT后面的路径必须全部为聚合函数,否则系统会给出相应的错误提示。如图所示。 - -
- -### 查询结果自动补值 - -在IoTDB实际使用中,做时间序列的查询操作时,会出现在某些时刻数值为空值的情况,这样的情况会影响使用者进行进一步的分析。为了更好的反映数据的变化程度,用户希望能够对缺失值进行自动填补,因此,IoTDB系统引入了自动补值(Fill)功能。 - -自动补值功能是指在针对单列或多列的时间序列查询中,根据用户的指定方法以及有效时间范围填充空值,若查询的点有值则自动补值功能不生效。 - -> 注:当前0.7.0版本中IoTDB为用户提供使用前一个数值填充(Previous)和使用线性拟合填充(Linear)两种方法。且填充仅可用在对某一个时间点进行查询传感器数值结果为空的情况。 - -#### 填充方法 -* Previous方法 - -当查询时间戳的值为空值时,用查询时间戳的前一个时间戳的值进行补值。 形式化的Previous自动补值方法如下所示: -``` -select <path> from <prefixPath> where time = <T> fill(<data_type>[previous, <before_range>], …) -``` - -其中各个参数含义如下: - -<center>
**表格3-4 Previous方法参数列表** - -|参数名|含义| -|:---|:---| -|path, prefixPath|查询路径,必选字段。| -|T|查询时间戳(该查询时间戳只能指定一个),必选字段。| -|data\_type|填充方法作用的数据类型。可选值为:int32, int64, float, double, boolean, text。可选字段。| -|before\_range|表示Previous填充方法的有效时间范围。当 [T-before\_range, T]范围内有数值时,Previous方法才能进行填充。当未指定before\_range的值时,before\_range为默认值T。可选字段。| -
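The Previous rule in Table 3-4 amounts to: look back at most `before_range` from the query time `T` and reuse the latest value found, otherwise stay empty. A minimal sketch, with invented sample points:

```python
# Sketch of the Previous fill rule from Table 3-4: at query time T, take the
# value of the latest point inside [T - before_range, T]; if none, return None.
def previous_fill(points, T, before_range):
    """points: time-sorted (time, value) pairs."""
    candidates = [v for (t, v) in points if T - before_range <= t <= T]
    return candidates[-1] if candidates else None

pts = [(0, 21.0), (60, 22.5)]            # invented sample points (seconds, value)
filled = previous_fill(pts, 110, 60)     # window [50, 110] reaches back to t=60
missed = previous_fill(pts, 200, 60)     # window [140, 200] holds no point
```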
- -在此我们给出一个使用Previous方法填充空值的示例,SQL语句如下所示: - -``` -select temperature from root.sgcc.wf03.wt01 where time = 2017-11-01T16:37:50.000 fill(float[previous, 1m]) -``` -该语句的含义为: - -由于2017-11-01T16:37:50.000时刻,时间序列`root.sgcc.wf03.wt01.temperature`结果为空值,系统采用2017-11-01T16:37:50.000时刻的前一个时间戳(且该时间戳在[2017-11-01T16:36:50.000, 2017-11-01T16:37:50.000]时间范围内)的值进行补值并显示。 - -在本文的[样例数据集](/#/Documents/progress/chap3/sec1)上,该语句的执行结果如图所示: -
- -值得说明的是,如果在填充指定的有效时间内没有数值,则系统不会进行填充,返回为空,如图所示: -
- -* Linear方法 - -当查询时间戳的值为空值时,用查询时间戳的前一个时间戳的值和后一个时间戳的值进行线性补值。 形式化的Linear自动补值方法如下所示: -``` -select <path> from <prefixPath> where time = <T> fill(<data_type>[linear, <before_range>, <after_range>]…) -``` -其中各个参数含义如下: - -<center>
**表格3-5 Linear方法参数列表** - -|参数名|含义| -|:---|:---| -|path, prefixPath|查询路径,必选字段。| -|T|查询时间戳(该查询时间戳只能指定一个),必选字段。| -|data\_type|填充方法作用的数据类型。可选值为:int32, int64, float, double, boolean, text。可选字段。| -|before\_range, after\_range|表示Linear填充方法的有效时间范围。当 [T-before\_range, T+after\_range]范围内前后都有数值时,Linear方法才能进行填充。当before\_range及after\_range未显式指定时,皆默认为无穷大。可选字段。| -</center>
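Per Table 3-5, Linear fill interpolates between the last point before `T` and the first point after `T`, provided both lie in the allowed window. The sketch below also reproduces the arithmetic of the worked example that follows (neighbouring values 21.927326 and 25.311783 one minute apart, queried 50 seconds after the first):

```python
# Sketch of the Linear fill rule from Table 3-5.
def linear_fill(points, T, before_range, after_range):
    """points: time-sorted (time, value) pairs; returns the interpolated value or None."""
    prev = [(t, v) for (t, v) in points if T - before_range <= t <= T]
    nxt = [(t, v) for (t, v) in points if T <= t <= T + after_range]
    if not prev or not nxt:
        return None  # a neighbour is missing inside the window: no fill
    (t1, v1), (t2, v2) = prev[-1], nxt[0]
    return v1 + (v2 - v1) * (T - t1) / (t2 - t1)

# Neighbours 60 s apart, query point 50 s after the first one.
v = linear_fill([(0, 21.927326), (60, 25.311783)], 50, 60, 60)
```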
- -在此我们给出一个使用Linear方法填充空值的示例,SQL语句如下所示: - -``` -select temperature from root.sgcc.wf03.wt01 where time = 2017-11-01T16:37:50.000 fill(float [linear, 1m, 1m]) -``` -该语句的含义为: - -由于2017-11-01T16:37:50.000时刻,时间序列`root.sgcc.wf03.wt01.temperature`结果为空值,系统采用2017-11-01T16:37:50.000时刻的前一个时间戳(且该时间戳在[2017-11-01T16:36:50.000, 2017-11-01T16:37:50.000]时间范围内)2017-11-01T16:37:00.000及其值21.927326、2017-11-01T16:37:50.000时刻的后一个时间戳(且该时间戳在[2017-11-01T16:37:50.000, 2017-11-01T16:38:50.000]时间范围内)2017-11-01T16:38:00.000及其值25.311783进行线性计算得到结果为21.927326 + (25.311783-21.927326)/60s*50s = 24.747707。 - -在本文的[样例数据集](/#/Documents/progress/chap3/sec1)上,该语句的执行结果如图所示: -<center><img style="width:100%; max-width:800px; max-height:600px; margin-left:auto; margin-right:auto; display:block;"
- -#### 数据类型与填充方法对应关系 -数据类型及支持的填充方式如表格3-6所示: - -
**表格3-6 数据类型及支持的填充方式** - -|数据类型|支持的填充方式| -|:---|:---| -|boolean|previous| -|int32|previous, linear| -|int64|previous, linear| -|float|previous, linear| -|double|previous, linear| -|text|previous| -
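Table 3-6 is effectively a compatibility lookup between data types and fill methods; a minimal sketch of that check:

```python
# Table 3-6 as a lookup: which fill methods each data type supports.
SUPPORTED_FILLS = {
    "boolean": {"previous"},
    "int32": {"previous", "linear"},
    "int64": {"previous", "linear"},
    "float": {"previous", "linear"},
    "double": {"previous", "linear"},
    "text": {"previous"},
}

def check_fill(data_type, method):
    """True if the data type supports the requested fill method."""
    return method in SUPPORTED_FILLS[data_type]
```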
- -需要注意的是,对数据类型不支持的fill方式,IoTDB系统会给出错误提示,如下图: - -
- -在不指定填充方式时,各类型有其自己默认的填充方式以及参数,对应关系如表格3-7: - -
**Table 3-7 各种数据类型的默认Fill方式** - -|数据类型|默认Fill方式| -|:---|:---| -|boolean|previous, 0| -|int32|linear, 0, 0| -|int64|linear, 0, 0| -|float|linear, 0, 0| -|double|linear, 0, 0| -|text|previous, 0| -
- -> 注意: 0.7.0版本中Fill语句内至少指定一种填充类型。 - -### 查询结果的分页控制 -为方便用户在对IoTDB进行查询时更好的进行结果阅读,IoTDB为用户提供了[LIMIT/SLIMIT](/#/Documents/progress/chap4/sec7)子句以及[OFFSET/SOFFSET](/#/Documents/progress/chap4/sec7)子句。使用LIMIT和SLIMIT子句可以允许用户对查询结果的行数和列数进行控制,使用OFFSET和SOFFSET子句可以允许用户设定结果展示的起始位置。 - -值得说明的是,LIMIT以及OFFSET/SOFFSET子句均不改变查询的实际执行过程,仅对查询返回的结果进行约束。 - -本章节主要介绍查询结果分页控制的相关示例。同时你也可以使用[Java JDBC](/#/Documents/progress/chap6/sec1)标准接口来执行相关的查询语句。 - -#### 查询结果的行数控制 - -通过使用LIMIT和OFFSET子句,用户可以对查询结果进行与行有关的控制。我们将通过以下几个例子来示范如何使用LIMIT和OFFSET子句对查询结果的行数进行控制。 - -* 例1:基本的LIMIT 子句 - -SQL语句为: - -``` -select status, temperature from root.ln.wf01.wt01 limit 10 -``` -其含义为: - -被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为“供电状态(status)”和“温度传感器(temperature)”;该语句要求返回查询结果的前10行。 - -该SQL语句的执行结果如下: - -
- - -* 例2:带OFFSET的LIMIT子句 - -SQL语句为: - -``` -select status, temperature from root.ln.wf01.wt01 limit 5 offset 3 -``` -其含义为: - -被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为“供电状态(status)”和“温度传感器(temperature)”;该语句要求返回查询结果的查询结果 的第3行到第7行(首行为第0行)。 - -该SQL语句的执行结果如下: - -
- -* 例3:与WHERE子句结合的LIMIT子句 - -SQL语句为: - -``` -select status,temperature from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time< 2017-11-01T00:12:00.000 limit 2 offset 3 -``` -其含义为: - -被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为“供电状态(status)”和“温度传感器(temperature)”;该语句要求选择出“2017-11-01T00:05:00.000”至“2017-11-01T00:12:00.000”之间的所选时间序列值的查询结果的第3行到第4行(首行为第0行)。 - -该SQL语句的执行结果如下: - -
- -* 例4:与GROUP BY子句结合的LIMIT子句 - -SQL语句为: - -``` -select count(status), max_value(temperature) from root.ln.wf01.wt01 group by (1d,[2017-11-01T00:00:00, 2017-11-07T23:00:00]) limit 5 offset 3 -``` -其含义为: - -返回查询结果的第3行到第7行(首行为第0行)。 - -该SQL语句的执行结果如下: - -
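LIMIT/OFFSET only trims the already-computed result set, which is ordinary slice semantics: with the first row numbered 0, `limit 5 offset 3` keeps rows 3 through 7 as in example 2 above, and an over-large limit simply returns everything. A toy model:

```python
# Toy model of LIMIT/OFFSET: plain slicing over the result rows.
def page(rows, limit, offset=0):
    return rows[offset:offset + limit]

rows = list(range(10))        # stand-in result rows 0..9
picked = page(rows, 5, 3)     # rows 3..7, first row numbered 0
everything = page(rows, 100)  # limit beyond the result size returns all rows
```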
- -值得说明的是,由于当前FILL子句仅能对某时间点的时间序列缺失值进行填充,FILL子句的执行结果为一行,因此LIMIT和OFFSET不允许和FILL子句结合使用,否则会提示错误。例如执行如下SQL语句: - -``` -select temperature from root.sgcc.wf03.wt01 where time = 2017-11-01T16:37:50.000 fill(float[previous, 1m]) limit 10 -``` - -该SQL语句将进行无法执行,并给出相应的错误提示,提示如下: - -
- -#### 查询结果的列数控制 - -SLIMIT子句与LIMIT子句用法相同,可以与WHERE子句、GROUP BY等子句组合使用,也可以与FILL子句组合。我们将通过以下几个例子来示范如何使用SLIMIT和SOFFSET子句对查询结果的列数进行控制。 - -* 例1:基本的SLIMIT子句 - -SQL语句为: - -``` -select * from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000 slimit 1 -``` -其含义为: - -被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为该设备下的第0列“供电状态(status)”;该语句要求选择出“2017-11-01T00:05:00.000”至“2017-11-01T00:12:00.000”之间的所选时间序列值的查询结果。 - -该SQL语句的执行结果如下: - -
- -* 例2:带SOFFSET的SLIMIT子句 - -SQL语句为: - -``` -select * from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000 slimit 1 soffset 1 -``` -其含义为: - -被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为该设备下的第1列“温度传感器(temperature)”(首列为第0列);该语句要求选择出“2017-11-01T00:05:00.000”至“2017-11-01T00:12:00.000”之间的所选时间序列值的查询结果。 - -该SQL语句的执行结果如下: - -
- -* 例3:与GROUP BY子句结合 - -SQL语句为: - -``` -select max_value(*) from root.ln.wf01.wt01 group by (1d, [2017-11-01T00:00:00, 2017-11-07T23:00:00]) slimit 1 soffset 1 -``` - -该SQL语句的执行结果如下: - -
- -* 例4:与FILL子句结合 - -SQL语句为: - -``` -select * from root.sgcc.wf03.wt01 where time = 2017-11-01T16:37:50.000 fill(float[previous, 1m]) slimit 1 soffset 1 -``` -该FILL子句的具体含义请参见本文第4.4.4.1.1节。 - -该SQL语句的执行结果如下: - -
- -值得说明的是,在使用过程中,SLIMIT子句只能和带星路径或者前缀路径查询结合使用,与仅含完整路径查询结合使用会提示错误。例如执行如下SQL语句: - -``` -select status,temperature from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000 slimit 1 -``` - -该SQL语句将无法执行,并给出相应的错误提示,提示如下: - -
- -#### 查询结果的行列控制 - -除查询结果的行或列控制,IoTDB允许用户同时对查询结果进行行列控制。下面示范一个完整的带有LIMIT子句和SLIMIT子句的例子。 - -SQL语句为: - -``` -select * from root.ln.wf01.wt01 limit 10 offset 100 slimit 2 soffset 0 -``` -其含义为: - -被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为该设备下的第0列到第1列(首列为第0列);该语句要求返回所选时间序列值的查询结果的第100行到109行(首行为第0行)。 - -该SQL语句的执行结果如下: - -
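行列控制组合使用时,可以理解为先对列做 SLIMIT/SOFFSET 切片,再对行做 LIMIT/OFFSET 切片(两者互不影响,顺序可交换)。下面的 Python 片段对上例 `limit 10 offset 100 slimit 2 soffset 0` 的语义做最小示意(结果集为假设数据):

```python
def page(rows, limit, offset, slimit, soffset):
    """模拟 LIMIT/OFFSET 与 SLIMIT/SOFFSET 的组合:对行、列分别切片。"""
    return [row[soffset:soffset + slimit] for row in rows[offset:offset + limit]]

# 假设结果集共 120 行、3 列,单元格记为 "r行号c列号"
rows = [[f"r{i}c{j}" for j in range(3)] for i in range(120)]

out = page(rows, limit=10, offset=100, slimit=2, soffset=0)
print(len(out))   # 10,即第 100 行到第 109 行
print(out[0])     # ['r100c0', 'r100c1'],即第 0、1 两列
```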
- -#### 错误情况的处理 - -当LIMIT/SLIMIT的参数N/SN超出结果集大小时,IoTDB将正常返回全部结果。例如如下SQL语句的执行结果仅有6行,我们通过limit语句选取其前100行: - -``` -select status,temperature from root.ln.wf01.wt01 -where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000 -limit 100 -``` -该SQL语句的执行结果如下: - -
- -当LIMIT/SLIMIT的参数N/SN超出允许的最大值(N/SN为int32类型)时,系统会提示相应错误。例如执行如下SQL语句: - -``` -select status,temperature from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000 limit 1234567890123456789 -``` -该SQL语句将无法执行,并给出相应的错误提示,错误提示如下: - -
- -当LIMIT/SLIMIT的参数N/SN不为正整数时,系统会给出相应的错误提示,例如执行如下错误语句: - -``` -select status,temperature from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000 limit 13.1 -``` - -该SQL语句将无法执行,并给出相应的错误提示,错误提示如下: - -
- -当OFFSET的参数OffsetValue超出结果集大小时,返回结果将为空。例如执行如下SQL语句: - -``` -select status,temperature from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000 limit 2 offset 6 -``` - -该SQL语句的执行结果如下: - -
- -当SOFFSET的参数SOffsetValue超出可选时间序列范围时,提示错误信息,例如执行如下SQL语句: - -``` -select * from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000 slimit 1 soffset 2 -``` -该SQL语句的执行结果如下: - -
diff --git a/docs/Documentation-CHN/UserGuide/4-Operation Manual/5-Data Maintenance.md b/docs/Documentation-CHN/UserGuide/4-Operation Manual/5-Data Maintenance.md deleted file mode 100644 index f487c68c8ed08043e801dde8f9834508f51dcf59..0000000000000000000000000000000000000000 --- a/docs/Documentation-CHN/UserGuide/4-Operation Manual/5-Data Maintenance.md +++ /dev/null @@ -1,88 +0,0 @@ - - -# 第4章 IoTDB操作指南 - -## 数据维护 - - - -### 数据删除 - -用户使用[DELETE语句](/#/Documents/progress/chap4/sec7)可以删除指定的时间序列中符合时间删除条件的数据。在删除数据时,用户可以选择需要删除的一个或多个时间序列、时间序列的前缀、时间序列带*路径对某时间之前的数据进行删除(当前版本暂不支持删除某一闭时间区间范围内的数据)。 - -在JAVA编程环境中,您可以使用[JDBC API](/#/Documents/progress/chap6/sec1)单条或批量执行UPDATE语句。 - -#### 单传感器时间序列值删除 - -以测控ln集团为例,存在这样的使用场景: - -wf02子站的wt02设备在2017-11-01 16:26:00之前的供电状态出现多段错误,且无法分析其正确数据,错误数据影响了与其他设备的关联分析。此时,需要将此时间段前的数据删除。进行此操作的SQL语句为: - -``` -delete from root.ln.wf02.wt02.status where time<=2017-11-01T16:26:00; -``` - -#### 多传感器时间序列值删除 - -当ln集团wf02子站的wt02设备在2017-11-01 16:26:00之前的供电状态和设备硬件版本都需要删除,此时可以使用含义更广的[前缀路径或带`*`路径](/#/Documents/progress/chap2/sec1)进行删除操作,进行此操作的SQL语句为: - -``` -delete from root.ln.wf02.wt02 where time <= 2017-11-01T16:26:00; -``` -或 - -``` -delete from root.ln.wf02.wt02.* where time <= 2017-11-01T16:26:00; -``` - -需要注意的是,当删除的路径不存在时,IoTDB会提示路径不存在,无法删除数据,如下所示。 -``` -IoTDB> delete from root.ln.wf03.wt02.status where time < now() -Msg: TimeSeries does not exist and its data cannot be deleted -``` diff --git a/docs/Documentation-CHN/UserGuide/4-Operation Manual/8-TsFile Usage.md b/docs/Documentation-CHN/UserGuide/4-Operation Manual/8-TsFile Usage.md deleted file mode 100644 index 228e04620f922849026ee96632a7bc8a85a807a1..0000000000000000000000000000000000000000 --- a/docs/Documentation-CHN/UserGuide/4-Operation Manual/8-TsFile Usage.md +++ /dev/null @@ -1,23 +0,0 @@ - -# 第4章 IoTDB操作指南 - -Coming Soon. 
\ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/5-Management/1-System Monitor.md b/docs/Documentation-CHN/UserGuide/5-Management/1-System Monitor.md deleted file mode 100644 index f455e561527bbcf20f5a4d7542f44827f4f5f3b7..0000000000000000000000000000000000000000 --- a/docs/Documentation-CHN/UserGuide/5-Management/1-System Monitor.md +++ /dev/null @@ -1,152 +0,0 @@ - - -# 第5章 系统管理 - -## 系统监控 - -当前用户可以使用Java的JConsole工具对正在运行的IoTDB进程进行系统状态监控,或使用IoTDB为用户开放的接口查看数据统计量。 - -### 系统状态监控 - -进入Jconsole监控页面后,首先看到的是IoTDB各类运行情况的概览。在这里,您可以看到[堆内存信息、线程信息、类信息以及服务器的CPU使用情况](https://docs.oracle.com/javase/7/docs/technotes/guides/management/jconsole.html)。 - -### 数据统计监控 - -本模块是IoTDB为用户提供的对其中存储数据信息的数据统计监控方式,我们会在系统中为您记录各个模块的数据统计信息,并将其汇总存入数据库中。当前版本的IoTDB提供IoTDB写入数据的统计功能。 - -用户可以选择开启或关闭数据统计监控功能(您可以设定配置文件中的`enable_stat_monitor`项,详细信息参见[第3.2节](/#/Documents/progress/chap3/sec2))。 - -#### 写入数据统计 - -系统目前对写入数据的统计可分为两大模块: 全局(Global) 写入数据统计和存储组(Storage Group) 写入数据统计。 全局统计量记录了所有写入数据的点数、请求数统计,存储组统计量对某一个存储组的写入数据进行了统计,系统默认设定每 5 秒 (若需更改统计频率,您可以设定配置文件中的`back_loop_period_in_second`项,详细信息参见本文[3.2节](/#/Documents/progress/chap3/sec2)) 将统计量写入 IoTDB 中,并以系统指定的命名方式存储。系统刷新或者重启后, IoTDB 不对统计量做恢复处理,统计量从零值重新开始计算。 - -为了避免统计信息占用过多空间,我们为统计信息加入定期清除无效数据的机制。系统将每隔一段时间删除无效数据。用户可以通过设置删除机制触发频率(`stat_monitor_retain_interval_in_second`项,默认为600s,详细信息参见本文[4.2节](/#/Documents/progress/chap3/sec2))配置删除数据的频率,通过设置有效数据的期限(`stat_monitor_detect_freq_in_second`项,默认为600s,详细信息参见本文[3.2节](/#/Documents/progress/chap3/sec2))设置有效数据的范围,即距离清除操作触发时间为`stat_monitor_detect_freq_in_second`以内的数据为有效数据。为了保证系统的稳定,不允许频繁地删除统计量,因此如果配置参数的时间小于默认值,系统不采用配置参数而使用默认参数。 - -注:当前版本统计的写入数据统计信息会同时统计用户写入的数据与系统内部监控数据。 - -写入数据统计项列表: - -* TOTAL_POINTS (全局) - -|名字| TOTAL\_POINTS | -|:---:|:---| -|描述| 写入总点数| -|时间序列名称| root.stats.write.global.TOTAL\_POINTS | -|服务器重启后是否重置| 是 | -|例子| select TOTAL_POINTS from root.stats.write.global| - -* TOTAL\_REQ\_SUCCESS (全局) - -|名字| TOTAL\_REQ\_SUCCESS | -|:---:|:---| -|描述| 写入请求成功次数| -|时间序列名称| 
root.stats.write.global.TOTAL\_REQ\_SUCCESS | -|服务器重启后是否重置| 是 | -|例子| select TOTAL\_REQ\_SUCCESS from root.stats.write.global| - -* TOTAL\_REQ\_FAIL (全局) - -|名字| TOTAL\_REQ\_FAIL | -|:---:|:---| -|描述| 写入请求失败次数| -|时间序列名称| root.stats.write.global.TOTAL\_REQ\_FAIL | -|服务器重启后是否重置| 是 | -|例子| select TOTAL\_REQ\_FAIL from root.stats.write.global| - - -* TOTAL\_POINTS\_FAIL (全局) - -|名字| TOTAL\_POINTS\_FAIL | -|:---:|:---| -|描述| 写入点数数百次数| -|时间序列名称| root.stats.write.global.TOTAL\_POINTS\_FAIL | -|服务器重启后是否重置| 是 | -|例子| select TOTAL\_POINTS\_FAIL from root.stats.write.global| - - -* TOTAL\_POINTS\_SUCCESS (全局) - -|名字| TOTAL\_POINTS\_SUCCESS | -|:---:|:---| -|描述| 写入点数成功次数| -|时间序列名称| root.stats.write.global.TOTAL\_POINTS\_SUCCESS | -|服务器重启后是否重置| 是 | -|例子| select TOTAL\_POINTS\_SUCCESS from root.stats.write.global| - -* TOTAL\_REQ\_SUCCESS (STORAGE GROUP) - -|名字| TOTAL\_REQ\_SUCCESS | -|:---:|:---| -|描述| 写入存储组成功次数| -|时间序列名称| root.stats.write.\.TOTAL\_REQ\_SUCCESS | -|服务器重启后是否重置| 是 | -|例子| select TOTAL\_REQ\_SUCCESS from root.stats.write.\| - -* TOTAL\_REQ\_FAIL (STORAGE GROUP) - -|名字| TOTAL\_REQ\_FAIL | -|:---:|:---| -|描述| 写入某个Storage group的请求失败次数| -|时间序列名称| root.stats.write.\.TOTAL\_REQ\_FAIL | -|服务器重启后是否重置| 是 | -|例子| select TOTAL\_REQ\_FAIL from root.stats.write.\| - - -* TOTAL\_POINTS\_SUCCESS (STORAGE GROUP) - -|名字| TOTAL\_POINTS\_SUCCESS | -|:---:|:---| -|描述| 写入某个Storage group成功的点数| -|时间序列名称| root.stats.write.\.TOTAL\_POINTS\_SUCCESS | -|服务器重启后是否重置| 是 | -|例子| select TOTAL\_POINTS\_SUCCESS from root.stats.write.\| - - -* TOTAL\_POINTS\_FAIL (STORAGE GROUP) - -|名字| TOTAL\_POINTS\_FAIL | -|:---:|:---| -|描述| 写入某个Storage group失败的点数| -|时间序列名称| root.stats.write.\.TOTAL\_POINTS\_FAIL | -|服务器重启后是否重置| 是 | -|例子| select TOTAL\_POINTS\_FAIL from root.stats.write.\| - -> 其中,\ 为所需进行数据统计的存储组名称,存储组中的“.”使用“_”代替。例如:名为'root.a.b'的存储组命名为:'root\_a\_b'。 - -下面为您展示两个具体的例子。用户可以通过`SELECT`语句查询自己所需要的写入数据统计项。(查询方法与普通的时间序列查询方式一致) - -我们以查询全局统计量总写入成功数(`TOTAL_POINTS_SUCCES`)为例,用IoTDB SELECT语句查询它的值。SQL语句如下: - 
-``` -select TOTAL_POINTS_SUCCESS from root.stats.write.global -``` - -我们以查询存储组root.ln的统计量总写入成功数(`TOTAL_POINTS_SUCCESS`)为例,用IoTDB SELECT语句查询它的值。SQL语句如下: - -``` -select TOTAL_POINTS_SUCCESS from root.stats.write.root_ln -``` - -若您需要查询当前系统的写入统计信息,您可以使用`MAX_VALUE()`聚合函数进行查询,SQL语句如下: -``` -select MAX_VALUE(TOTAL_POINTS_SUCCESS) from root.stats.write.root_ln -``` diff --git a/docs/Documentation-CHN/UserGuide/5-Management/2-Performance Monitor.md b/docs/Documentation-CHN/UserGuide/5-Management/2-Performance Monitor.md deleted file mode 100644 index 52206aaf0889773b289b86edd268bd5c38411b16..0000000000000000000000000000000000000000 --- a/docs/Documentation-CHN/UserGuide/5-Management/2-Performance Monitor.md +++ /dev/null @@ -1,83 +0,0 @@ - - -# 第5章 系统管理 -## 性能监控 -### 引言 - -性能监控模块用来监控IOTDB每一个操作的耗时,以便用户更好的了解数据库的整体性能。此模块会统计每一种操作的平均耗时,以及耗时在一定时间区间内(1ms,4ms,16ms,64ms,256ms,1024ms,以上)的操作的比例。输出文件在log_measure.log中。输出样例如下: - - - -### 配置参数 - -配置文件位置:conf/iotdb-engine.properties - -
**表 -配置参数以及描述项** - -|参数|默认值|描述| -|:---|:---|:---| -|enable\_performance\_stat|false|是否开启性能监控模块| -|performance\_stat\_display\_interval|60000|打印统计结果的时间延迟,以毫秒为单位| -|performance_stat_memory_in_kb|20|性能监控模块使用的内存阈值,单位为KB| -
- -### 利用JMX MBean动态调节参数 - -通过端口31999连接jconsole,并在上方菜单项中选择‘MBean’. 展开侧边框并选择 'org.apache.iotdb.db.cost.statistic'. 将会得到如下图所示结果: - - - -**属性** - -1. EnableStat:是否开启性能监控模块,如果被设置为true,则性能监控模块会记录每个操作的耗时并打印结果。这个参数不能直接通过jconsole直接更改,但可通过下方的函数来进行动态设置。 -2. DisplayIntervalInMs:相邻两次打印结果的时间间隔。这个参数可以直接设置,但它要等性能监控模块重启才会生效。重启性能监控模块可以通过先调用 stopStatistic()然后调用startContinuousStatistics()或者直接调用 startOneTimeStatistics()实现。 -3. OperationSwitch:这个属性用来展示针对每一种操作是否开启了监控统计,map的键为操作的名字,值为是否针对这种操作开启性能监控。这个参数不能直接通过jconsole直接更改,但可通过下方的 'changeOperationSwitch()'函数来进行动态设置。 - -**操作** - -1. startContinuousStatistics:开启性能监控并以‘DisplayIntervalInMs’的时间间隔打印统计结果。 -2. startOneTimeStatistics:开启性能监控并以‘DisplayIntervalInMs’的时间延迟打印一次统计结果。 -3. stopStatistic:关闭性能监控。 -4. clearStatisticalState(): 清除以统计的结果,从新开始统计。 -5. changeOperationSwitch(String operationName, Boolean operationState):设置是否针对每一种不同的操作开启监控。参数‘operationName是操作的名称,在OperationSwitch属性中展示了所有操作的名称。参数 ‘operationState’是操作的状态,打开或者关闭。如果状态设置成功则此函数会返回true,否则返回false。 - -### 自定义操作类型监控其他区域 - -**增加操作项** - -在org.apache.iotdb.db.cost.statistic.Operation类中增加一个枚举项来表示新增的操作. - -**在监控区域增加监控代码** - -在监控开始区域增加计时代码: - - long t0 = System. currentTimeMillis(); - -在监控结束区域增加记录代码: - - Measurement.INSTANCE.addOperationLatency(Operation, t0); - -## cache命中率统计 - -### 概述 - -为了提高查询性能,IOTDB对ChunkMetaData和TsFileMetaData进行了缓存。用户可以通过debug级别的日志以及MXBean两种方式来查看缓存的命中率,并根据缓存命中率以及系统内存来调节缓存所使用的内存大小。使用MXBean查看缓存命中率的方法为: -1. 通过端口31999连接jconsole,并在上方菜单项中选择‘MBean’. -2. 展开侧边框并选择 'org.apache.iotdb.db.service'. 
将会得到如下图所示结果: - - \ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/5-Management/3-System log.md b/docs/Documentation-CHN/UserGuide/5-Management/3-System log.md deleted file mode 100644 index b7a954636930aefe7a2f548a4427aa63b7399873..0000000000000000000000000000000000000000 --- a/docs/Documentation-CHN/UserGuide/5-Management/3-System log.md +++ /dev/null @@ -1,64 +0,0 @@ - - -# 第5章 系统管理 - -## 系统日志 - -IoTDB支持用户通过修改日志配置文件的方式对IoTDB系统日志(如日志输出级别等)进行配置,系统日志配置文件默认位置在$IOTDB_HOME/conf文件夹下,默认的日志配置文件名为logback.xml。用户可以通过增加或更改其中的xml树型节点参数对系统运行日志的相关配置进行修改。需要说明的是,使用日志配置文件对系统日志进行配置并非修改完文件立刻生效,而是重启IoTDB系统后生效。详细配置说明参看本文日志文件配置说明。 - -同时,为了方便在系统运行过程中运维人员对系统的调试,我们为系统运维人员提供了动态修改日志配置的JMX接口,能够在系统不重启的前提下实时对系统的Log模块进行配置。详细使用方法参看动态系统日志配置说明)。 - -### 动态系统日志配置说明 - -#### 连接JMX - -本节以Jconsole为例介绍连接JMX并进入动态系统日志配置模块的方法。启动Jconsole控制页面,在新建连接处建立与IoTDB Server的JMX连接(可以选择本地进程或给定IoTDB的IP及PORT进行远程连接,IoTDB的JMX服务默认运行端口为31999),如下图使用远程进程连接Localhost下运行在31999端口的IoTDB JMX服务。 - - - -连接到JMX后,您可以通过MBean选项卡找到名为`ch.qos.logback.classic`的`MBean`,如下图所示。 - - - -在`ch.qos.logback.classic`的MBean操作(Operations)选项中,可以看到当前动态系统日志配置支持的5种接口,您可以通过使用相应的方法,来执行相应的操作,操作页面如图。 - - - -#### 动态系统日志接口说明 - -* reloadDefaultConfiguration接口 - -该方法为重新加载默认的logback配置文件,用户可以先对默认的配置文件进行修改,然后调用该方法将修改后的配置文件重新加载到系统中,使其生效。 - -* reloadByFileName接口 - -该方法为加载一个指定路径的logback配置文件,并使其生效。该方法接受一个名为p1的String类型的参数,该参数为需要指定加载的配置文件路径。 - -* getLoggerEffectiveLevel接口 - -该方法为获取指定Logger当前生效的日志级别。该方法接受一个名为p1的String类型的参数,该参数为指定Logger的名称。该方法返回指定Logger当前生效的日志级别。 - -* getLoggerLevel接口 - -该方法为获取指定Logger的日志级别。该方法接受一个名为p1的String类型的参数,该参数为指定Logger的名称。该方法返回指定Logger的日志级别。 - -需要注意的是,该方法与`getLoggerEffectiveLevel`方法的区别在于,该方法返回的是指定Logger在配置文件中被设定的日志级别,如果用户没有对该Logger进行日志级别的设定,则返回空。按照Logback的日志级别继承机制,如果一个Logger没有被显示地设定日志级别,其将会从其最近的祖先继承日志级别的设定。这时,调用`getLoggerEffectiveLevel`方法将返回该Logger生效的日志级别;而调用本节所述方法,将返回空。 diff --git a/docs/Documentation-CHN/UserGuide/4-Operation Manual/2-Data Model Selection.md b/docs/Documentation-CHN/UserGuide/5-Operation Manual/1-DDL (Data 
Definition Language).md similarity index 61% rename from docs/Documentation-CHN/UserGuide/4-Operation Manual/2-Data Model Selection.md rename to docs/Documentation-CHN/UserGuide/5-Operation Manual/1-DDL (Data Definition Language).md index 603eb7af4e5a5750406bbc5a7479e6707ed12e8e..42a1d3b985e8f4f29d84329a5c9abd753d147acb 100644 --- a/docs/Documentation-CHN/UserGuide/4-Operation Manual/2-Data Model Selection.md +++ b/docs/Documentation-CHN/UserGuide/5-Operation Manual/1-DDL (Data Definition Language).md @@ -19,32 +19,12 @@ --> -# 第4章 IoTDB操作指南 - -## 样例数据 - -我们为您提供一份简化的[样例数据](https://github.com/apache/incubator-iotdb/blob/master/docs/Documentation/OtherMaterial-Sample%20Data.txt)。 - -下载文件: [IoTDB-SampleData.txt](https://raw.githubusercontent.com/apache/incubator-iotdb/master/docs/Documentation/OtherMaterial-Sample%20Data.txt). - - -## 数据模型选用与创建 - -在向IoTDB导入数据之前,首先要根据样例数据选择合适的数据存储模型,然后使用[SET STORAGE GROUP](/#/Documents/progress/chap4/sec7)语句和[CREATE TIMESERIES](/#/Documents/progress/chap4/sec7)语句设置存储组,并创建时间序列。 - -### 选用存储模型 - -根据本文描述的[数据](https://github.com/apache/incubator-iotdb/blob/master/docs/Documentation/OtherMaterial-Sample%20Data.txt)属性层级,按照属性涵盖范围以及它们之间的从属关系,我们可将其表示为如下图3.1的属性层级组织结构,其层级关系为:集团层-电场层-设备层-传感器层。其中ROOT为根节点,传感器层的每一个节点称为叶子节点。在使用IoTDB的过程中,您可以直接将由ROOT节点到每一个叶子节点路径上的属性用“.”连接,将其作为一个IoTDB的时间序列的名称。图3.1中最左侧的路径可以生成一个名为`ROOT.ln.wf01.wt01.status`的时间序列。 - -
- -**图3.1 属性层级组织结构**
- -得到时间序列的名称之后,我们需要根据数据的实际场景和规模设置存储组。由于在本文所述场景中,每次到达的数据通常以集团为单位(即数据可能为跨电场、跨设备的),为了写入数据时避免频繁切换IO降低系统速度,且满足用户以集团为单位进行物理隔离数据的要求,我们将存储组设置在集团层。 +# 第5章 IoTDB操作指南 +## DDL (数据定义语言) ### 创建存储组 -存储模型选用后,我们可以根据存储模型建立相应的存储组。创建存储组的SQL语句如下所示: +我们可以根据存储模型建立相应的存储组。创建存储组的SQL语句如下所示: ``` IoTDB > set storage group to root.ln @@ -62,7 +42,7 @@ Msg: org.apache.iotdb.exception.MetadataErrorException: org.apache.iotdb.excepti ### 查看存储组 -在存储组创建后,我们可以使用[SHOW STORAGE GROUP](/#/Documents/progress/chap4/sec7)语句来查看所有的存储组,SQL语句如下所示: +在存储组创建后,我们可以使用[SHOW STORAGE GROUP](/#/Documents/progress/chap5/sec4)语句来查看所有的存储组,SQL语句如下所示: ``` IoTDB> show storage group @@ -112,10 +92,45 @@ IoTDB> show timeseries root.ln 需要注意的是,当查询路径不存在时,系统会返回0条时间序列。 -### 注意事项 +### 统计时间序列总数 + +IoTDB支持使用`COUNT TIMESERIES`来统计一条路径中的时间序列个数。SQL语句如下所示: +``` +IoTDB > COUNT TIMESERIES root +IoTDB > COUNT TIMESERIES root.ln +IoTDB > COUNT TIMESERIES root.ln.*.*.status +IoTDB > COUNT TIMESERIES root.ln.wf01.wt01.status +``` + +### 删除时间序列 +我们可以使用`DELETE TimeSeries `语句来删除我们之前创建的时间序列。SQL语句如下所示: +``` +IoTDB> delete timeseries root.ln.wf01.wt01.status +IoTDB> delete timeseries root.ln.wf01.wt01.temperature, root.ln.wf02.wt02.hardware +IoTDB> delete timeseries root.ln.wf02* +``` + +### 查看设备 +我们可以通过使用`SHOW DEVICES`语句查看当前所有设备。SQL语句如下所示: +``` +IoTDB> show devices +``` -当前版本对用户操作的数据规模进行一些限制: +## TTL +IoTDB支持对存储组级别设置数据存活时间(TTL),这使得IoTDB可以定期、自动地删除一定时间之前的数据。合理使用TTL +可以帮助您控制IoTDB占用的总磁盘空间以避免出现磁盘写满等异常。并且,随着文件数量的增多,查询性能往往随之下降, +内存占用也会有所提高。及时地删除一些较老的文件有助于使查询性能维持在一个较高的水平和减少内存资源的占用。 + +### 设置 TTL +设置TTL的SQL语句如下所示: +``` +IoTDB> set ttl to root.ln 3600000 +``` +这个例子表示在`root.ln`存储组中,只有最近一个小时的数据将会保存,旧数据会被移除或不可见。 + +### 取消 TTL +``` +IoTDB> unset ttl to root.ln +``` -限制1:假设运行时IoTDB分配到的JVM内存大小为p,用户自定义的每次将内存中的数据写入到磁盘时的大小([group_size_in_byte](/#/Documents/progress/chap3/sec2))为q。存储组的数量不能超过p/q。 -限制2:时间序列的数量不超过运行时IoTDB分配到的JVM内存与20KB的比值。 diff --git a/docs/Documentation-CHN/UserGuide/5-Operation Manual/2-DML (Data Manipulation Languange).md
b/docs/Documentation-CHN/UserGuide/5-Operation Manual/2-DML (Data Manipulation Languange).md new file mode 100644 index 0000000000000000000000000000000000000000..d9a9c8ebec1a43e45d8bd0909b2a145678f9a829 --- /dev/null +++ b/docs/Documentation-CHN/UserGuide/5-Operation Manual/2-DML (Data Manipulation Languange).md @@ -0,0 +1,193 @@ + + +# 第5章 IoTDB操作指南 +## DML (数据操作语言) +## 数据接入 + +IoTDB为用户提供多种插入实时数据的方式,例如在[Cli/Shell工具](/#/Documents/progress/chap4/sec1)中直接输入插入数据的INSERT语句,或使用Java API(标准[Java JDBC](/#/Documents/progress/chap4/sec2)接口)单条或批量执行插入数据的INSERT语句。 + +本节主要为您介绍实时数据接入的INSERT语句在场景中的实际使用示例,有关INSERT SQL语句的详细语法请参见本文[INSERT语句](/#/Documents/progress/chap5/sec4)节。 + +### 使用INSERT语句 +使用INSERT语句可以向指定的已经创建的一条或多条时间序列中插入数据。对于每一条数据,均由一个时间戳类型的时间戳和一个数值或布尔值、字符串类型的传感器采集值组成。 + +在本节的场景实例下,以其中的两个时间序列`root.ln.wf02.wt02.status`和`root.ln.wf02.wt02.hardware`为例,它们的数据类型分别为BOOLEAN和TEXT。 + +单列数据插入示例代码如下: +``` +IoTDB > insert into root.ln.wf02.wt02(timestamp,status) values(1,true) +IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1, "v1") +``` + +以上示例代码将长整型的timestamp以及值为true的数据插入到时间序列`root.ln.wf02.wt02.status`中,以及将长整型的timestamp以及值为”v1”的数据插入到时间序列`root.ln.wf02.wt02.hardware`中。执行成功后会返回执行时间,代表数据插入已完成。 + +> 注意:在IoTDB中,TEXT类型的数据单双引号都可以用来表示,上面的插入语句使用的是双引号表示TEXT类型数据,下面的示例将使用单引号表示TEXT类型数据。 + +INSERT语句还可以支持在同一个时间点下多列数据的插入,同时向时间戳2插入上述两个时间序列的值,多列数据插入示例代码如下: + +``` +IoTDB > insert into root.ln.wf02.wt02(timestamp, status, hardware) VALUES (2, false, 'v2') +``` + +插入数据后我们可以使用SELECT语句简单查询已插入的数据。 + +``` +IoTDB > select * from root.ln.wf02 where time < 3 +``` + +结果如图所示。由查询结果可以看出,单列、多列数据的插入操作正确执行。 +
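上述单列与多列 INSERT 的写入效果,可以用下面的 Python 片段做一个极简示意(这里用嵌套字典模拟一个设备下按时间戳组织的测点值,仅为说明语义的假设实现,与 IoTDB 的内部存储无关):

```python
def insert(store, timestamp, **values):
    """模拟 INSERT 语句:向指定时间戳写入一个或多个测点的值。"""
    store.setdefault(timestamp, {}).update(values)

# 模拟设备 root.ln.wf02.wt02 下的数据
wt02 = {}
insert(wt02, 1, status=True)                  # 单列插入 status
insert(wt02, 1, hardware="v1")                # 同一时间戳插入另一列 hardware
insert(wt02, 2, status=False, hardware="v2")  # 多列插入

# 模拟 "select * from root.ln.wf02 where time < 3" 的时间筛选
result = {t: v for t, v in sorted(wt02.items()) if t < 3}
print(result)
# {1: {'status': True, 'hardware': 'v1'}, 2: {'status': False, 'hardware': 'v2'}}
```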
+ +### INSERT语句的错误处理 + +若用户向一个不存在的时间序列中插入数据,例如执行以下命令: + +``` +IoTDB > insert into root.ln.wf02.wt02(timestamp, temperature) values(1,"v1") +``` + +由于`root.ln.wf02.wt02.temperature`时间序列不存在,系统将会返回以下ERROR告知该Timeseries路径不存在: + +``` +Msg: Current deviceId[root.ln.wf02.wt02] does not contains measurement:temperature +``` +若用户插入的数据类型与该Timeseries对应的数据类型不一致,例如执行以下命令: +``` +IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1,100) +``` +系统将会返回以下ERROR告知数据类型有误: +``` +error: The TEXT data type should be covered by " or ' +``` + +## 数据查询 +### 时间切片查询 + +本节主要介绍时间切片查询的相关示例,主要使用的是[IoTDB SELECT语句](/#/Documents/progress/chap5/sec4)。同时,您也可以使用[Java JDBC](/#/Documents/progress/chap4/sec2)标准接口来执行相关的查询语句。 + +#### 根据一个时间区间选择一列数据 + +SQL语句为: + +``` +select temperature from root.ln.wf01.wt01 where time < 2017-11-01T00:08:00.000 +``` +其含义为: + +被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为温度传感器(temperature);该语句要求选择出该设备在“2017-11-01T00:08:00.000”(此处可以使用多种时间格式,详情可参看[2.1节](/#/Documents/progress/chap2/sec1))时间点以前的所有温度传感器的值。 + +该SQL语句的执行结果如下: + +
+ +#### 根据一个时间区间选择多列数据 + +SQL语句为: + +``` +select status, temperature from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000; +``` +其含义为: + +被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为供电状态(status)和温度传感器(temperature);该语句要求选择出“2017-11-01T00:05:00.000”至“2017-11-01T00:12:00.000”之间的所选时间序列的值。 + +该SQL语句的执行结果如下: + +
+ +#### 按照多个时间区间选择同一设备的多列数据 + +IoTDB支持在一次查询中指定多个时间区间条件,用户可以根据需求随意组合时间区间条件。例如, + +SQL语句为: + +``` +select status,temperature from root.ln.wf01.wt01 where (time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000) or (time >= 2017-11-01T16:35:00.000 and time <= 2017-11-01T16:37:00.000); +``` +其含义为: + +被选择的设备为ln集团wf01子站wt01设备;被选择的时间序列为“供电状态(status)”和“温度传感器(temperature)”;该语句指定了两个不同的时间区间,分别为“2017-11-01T00:05:00.000至2017-11-01T00:12:00.000”和“2017-11-01T16:35:00.000至2017-11-01T16:37:00.000”;该语句要求选择出满足任一时间区间的被选时间序列的值。 + +该SQL语句的执行结果如下: +
+ + +#### 按照多个时间区间选择不同设备的多列数据 + +该系统支持在一次查询中选择任意列的数据,也就是说,被选择的列可以来源于不同的设备。例如,SQL语句为: + +``` +select wf01.wt01.status,wf02.wt02.hardware from root.ln where (time > 2017-11-01T00:05:00.000 and time < 2017-11-01T00:12:00.000) or (time >= 2017-11-01T16:35:00.000 and time <= 2017-11-01T16:37:00.000); +``` +其含义为: + +被选择的时间序列为“ln集团wf01子站wt01设备的供电状态”以及“ln集团wf02子站wt02设备的硬件版本”;该语句指定了两个时间区间,分别为“2017-11-01T00:05:00.000至2017-11-01T00:12:00.000”和“2017-11-01T16:35:00.000至2017-11-01T16:37:00.000”;该语句要求选择出满足任意时间区间的被选时间序列的值。 + +该SQL语句的执行结果如下: +
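多个时间区间条件的组合,本质上是判断时间戳是否落在任一给定区间内,区间端点的开闭与 SQL 中 >、>=、<、<= 的写法一一对应。下面的 Python 片段对上例的时间筛选语义做最小示意(为便于演示,将时间戳简化为整数分钟数,仅为假设数据):

```python
def selected(t):
    """对应 (time > 5 and time < 12) or (time >= 35 and time <= 37),
    其中 5、12、35、37 为简化后的分钟数,仅作示意。"""
    return (5 < t < 12) or (35 <= t <= 37)

timestamps = [4, 5, 6, 11, 12, 35, 36, 37, 38]
print([t for t in timestamps if selected(t)])  # [6, 11, 35, 36, 37]
```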
+ +### 降频聚合查询 + +本章节主要介绍降频聚合查询的相关示例,主要使用的是IoTDB SELECT语句的[GROUP BY子句](/#/Documents/progress/chap5/sec4),该子句是IoTDB中用于根据用户给定划分条件对结果集进行划分,并对已划分的结果集进行聚合计算的语句。IoTDB支持根据时间间隔对结果集进行划分,默认结果按照时间升序排列。同时,您也可以使用Java JDBC标准接口来执行相关的查询语句。 + +GROUP BY语句为用户提供三类指定参数: + +* 参数1:划分时间轴的时间间隔参数 +* 参数2:时间轴划分原点参数(可选参数) +* 参数3:时间轴显示时间窗参数(一个或多个) + +## 数据维护 + +### 数据删除 + +用户使用[DELETE语句](/#/Documents/progress/chap5/sec4)可以删除指定的时间序列中符合时间删除条件的数据。在删除数据时,用户可以选择需要删除的一个或多个时间序列、时间序列的前缀、时间序列带\*路径对某时间之前的数据进行删除(当前版本暂不支持删除某一闭时间区间范围内的数据)。 + +在JAVA编程环境中,您可以使用JDBC API单条或批量执行DELETE语句。 + +#### 单传感器时间序列值删除 + +以测控ln集团为例,存在这样的使用场景: + +wf02子站的wt02设备在2017-11-01 16:26:00之前的供电状态出现多段错误,且无法分析其正确数据,错误数据影响了与其他设备的关联分析。此时,需要将此时间段前的数据删除。进行此操作的SQL语句为: + +``` +delete from root.ln.wf02.wt02.status where time<=2017-11-01T16:26:00; +``` + +#### 多传感器时间序列值删除 + +当ln集团wf02子站的wt02设备在2017-11-01 16:26:00之前的供电状态和设备硬件版本都需要删除,此时可以使用含义更广的[前缀路径或带`*`路径](/#/Documents/progress/chap2/sec1)进行删除操作,进行此操作的SQL语句为: + +``` +delete from root.ln.wf02.wt02 where time <= 2017-11-01T16:26:00; +``` +或 + +``` +delete from root.ln.wf02.wt02.* where time <= 2017-11-01T16:26:00; +``` + +需要注意的是,当删除的路径不存在时,IoTDB会提示路径不存在,无法删除数据,如下所示。 +``` +IoTDB> delete from root.ln.wf03.wt02.status where time < now() +Msg: TimeSeries does not exist and its data cannot be deleted +``` diff --git a/docs/Documentation-CHN/UserGuide/4-Operation Manual/6-Priviledge Management.md b/docs/Documentation-CHN/UserGuide/5-Operation Manual/3-Account Management Statements.md similarity index 96% rename from docs/Documentation-CHN/UserGuide/4-Operation Manual/6-Priviledge Management.md rename to docs/Documentation-CHN/UserGuide/5-Operation Manual/3-Account Management Statements.md index 9c69b9e60b301c7509925e22b00ecd2262a73de8..f34f79ccb1b948dcd27edf345790fe58dd2d71c9 100644 --- a/docs/Documentation-CHN/UserGuide/4-Operation Manual/6-Priviledge Management.md +++ b/docs/Documentation-CHN/UserGuide/5-Operation Manual/3-Account Management Statements.md @@ -19,13 +19,13 @@ --> -# 第4章 IoTDB操作指南 +# 
第5章 IoTDB操作指南 -## 权限管理 +## 账户管理语句 IoTDB为用户提供了权限管理操作,从而为用户提供对于数据的权限管理功能,保障数据的安全。 -我们将通过以下几个具体的例子为您示范基本的用户权限操作,详细的SQL语句及使用方式详情请参见本文[第5.1节](/#/Documents/progress/chap4/sec7)。同时,在JAVA编程环境中,您可以使用[JDBC API](/#/Documents/progress/chap6/sec1)单条或批量执行权限管理类语句。 +我们将通过以下几个具体的例子为您示范基本的用户权限操作,详细的SQL语句及使用方式请参见本文[第5.4节](/#/Documents/progress/chap5/sec4)。同时,在JAVA编程环境中,您可以使用[JDBC API](/#/Documents/progress/chap4/sec2)单条或批量执行权限管理类语句。 ### 基本概念 #### 用户 @@ -46,7 +46,7 @@ IoTDB为用户提供了权限管理操作,从而为用户提供对于数据的 ### 权限操作示例 -根据本文中描述的[样例数据](/#/Documents/progress/chap4/sec2)内容,IoTDB的样例数据可能同时属于ln, sgcc等不同发电集团,不同的发电集团不希望其他发电集团获取自己的数据库数据,因此我们需要将不同的数据在集团层进行权限隔离。 +根据本文中描述的[样例数据](/#/Documents/progress/chap5/sec1)内容,IoTDB的样例数据可能同时属于ln, sgcc等不同发电集团,不同的发电集团不希望其他发电集团获取自己的数据库数据,因此我们需要将不同的数据在集团层进行权限隔离。 #### 创建用户 diff --git a/docs/Documentation-CHN/UserGuide/4-Operation Manual/7-IoTDB Query Language.md b/docs/Documentation-CHN/UserGuide/5-Operation Manual/4-SQL Reference.md similarity index 99% rename from docs/Documentation-CHN/UserGuide/4-Operation Manual/7-IoTDB Query Language.md rename to docs/Documentation-CHN/UserGuide/5-Operation Manual/4-SQL Reference.md index 0823570695654607be98bb29a36475f61208e86b..f65d25d190c78d8bb652aa7410895a147fc7aa67 100644 --- a/docs/Documentation-CHN/UserGuide/4-Operation Manual/7-IoTDB Query Language.md +++ b/docs/Documentation-CHN/UserGuide/5-Operation Manual/4-SQL Reference.md @@ -19,10 +19,8 @@ --> -# 第4章 IoTDB操作指南 - -## IoTDB查询语言 - +# 第5章 IoTDB操作指南 +## SQL 参考文档 ### Schema语句 @@ -129,6 +127,13 @@ Note: The path can be prefix path or timeseries path. Note: This statement can be used in IoTDB Client and JDBC. ``` +* 显示设备语句 +``` +SHOW DEVICES +Eg: IoTDB > SHOW DEVICES +Note: This statement can be used in IoTDB Client and JDBC.
+``` + ### 数据管理语句 * 插入记录语句 @@ -439,7 +444,7 @@ Eg: IoTDB > LIST ALL USER OF ROLE roleuser; ALTER USER SET PASSWORD ; roleName:=identifier password:=string -Eg: IoTDB > UPDATE USER tempuser SET PASSWORD 'newpwd'; +Eg: IoTDB > ALTER USER tempuser SET PASSWORD newpwd; ``` ### 功能 diff --git a/docs/Documentation-CHN/UserGuide/9-System Tools/1-Sync Tool.md b/docs/Documentation-CHN/UserGuide/6-System Tools/1-Sync Tool.md similarity index 100% rename from docs/Documentation-CHN/UserGuide/9-System Tools/1-Sync Tool.md rename to docs/Documentation-CHN/UserGuide/6-System Tools/1-Sync Tool.md diff --git a/docs/Documentation-CHN/UserGuide/9-System Tools/2-Memory Estimation Tool.md b/docs/Documentation-CHN/UserGuide/6-System Tools/2-Memory Estimation Tool.md similarity index 100% rename from docs/Documentation-CHN/UserGuide/9-System Tools/2-Memory Estimation Tool.md rename to docs/Documentation-CHN/UserGuide/6-System Tools/2-Memory Estimation Tool.md diff --git a/docs/Documentation-CHN/UserGuide/9-System Tools/3-JMX Tool.md b/docs/Documentation-CHN/UserGuide/6-System Tools/3-JMX Tool.md similarity index 100% rename from docs/Documentation-CHN/UserGuide/9-System Tools/3-JMX Tool.md rename to docs/Documentation-CHN/UserGuide/6-System Tools/3-JMX Tool.md diff --git a/docs/Documentation-CHN/UserGuide/9-System Tools/4-Watermark Tool.md b/docs/Documentation-CHN/UserGuide/6-System Tools/4-Watermark Tool.md similarity index 100% rename from docs/Documentation-CHN/UserGuide/9-System Tools/4-Watermark Tool.md rename to docs/Documentation-CHN/UserGuide/6-System Tools/4-Watermark Tool.md diff --git a/docs/Documentation-CHN/UserGuide/9-System Tools/5-Log Visualizer.md b/docs/Documentation-CHN/UserGuide/6-System Tools/5-Log Visualizer.md similarity index 100% rename from docs/Documentation-CHN/UserGuide/9-System Tools/5-Log Visualizer.md rename to docs/Documentation-CHN/UserGuide/6-System Tools/5-Log Visualizer.md diff --git a/docs/Documentation-CHN/UserGuide/9-System Tools/6-Query History 
Visualization Tool.md b/docs/Documentation-CHN/UserGuide/6-System Tools/6-Query History Visualization Tool.md similarity index 100% rename from docs/Documentation-CHN/UserGuide/9-System Tools/6-Query History Visualization Tool.md rename to docs/Documentation-CHN/UserGuide/6-System Tools/6-Query History Visualization Tool.md diff --git a/docs/Documentation-CHN/UserGuide/6-System Tools/7-Monitor and Log Tools.md b/docs/Documentation-CHN/UserGuide/6-System Tools/7-Monitor and Log Tools.md new file mode 100644 index 0000000000000000000000000000000000000000..d24f6004f6675c69c049a58b87f523799e0cf80f --- /dev/null +++ b/docs/Documentation-CHN/UserGuide/6-System Tools/7-Monitor and Log Tools.md @@ -0,0 +1,257 @@ + + +# 第6章: 系统工具 +## 监控与日志工具 +## 系统监控 + +当前用户可以使用Java的JConsole工具对正在运行的IoTDB进程进行系统状态监控,或使用IoTDB为用户开放的接口查看数据统计量。 + +### 系统状态监控 + +进入Jconsole监控页面后,首先看到的是IoTDB各类运行情况的概览。在这里,您可以看到[堆内存信息、线程信息、类信息以及服务器的CPU使用情况](https://docs.oracle.com/javase/7/docs/technotes/guides/management/jconsole.html)。 + +### 数据统计监控 + +本模块是IoTDB为用户提供的对其中存储数据信息的数据统计监控方式,我们会在系统中为您记录各个模块的数据统计信息,并将其汇总存入数据库中。当前版本的IoTDB提供IoTDB写入数据的统计功能。 + +用户可以选择开启或关闭数据统计监控功能(您可以设定配置文件中的`enable_stat_monitor`项,详细信息参见[第3.4节](/#/Documents/progress/chap3/sec4))。 + +#### 写入数据统计 + +系统目前对写入数据的统计可分为两大模块: 全局(Global) 写入数据统计和存储组(Storage Group) 写入数据统计。 全局统计量记录了所有写入数据的点数、请求数统计,存储组统计量对某一个存储组的写入数据进行了统计,系统默认设定每 5 秒 (若需更改统计频率,您可以设定配置文件中的`back_loop_period_in_second`项,详细信息参见本文[3.4节](/#/Documents/progress/chap3/sec4)) 将统计量写入 IoTDB 中,并以系统指定的命名方式存储。系统刷新或者重启后, IoTDB 不对统计量做恢复处理,统计量从零值重新开始计算。 + +为了避免统计信息占用过多空间,我们为统计信息加入定期清除无效数据的机制。系统将每隔一段时间删除无效数据。用户可以通过设置删除机制触发频率(`stat_monitor_retain_interval_in_second`项,默认为600s,详细信息参见本文[3.4节](/#/Documents/progress/chap3/sec4))配置删除数据的频率,通过设置有效数据的期限(`stat_monitor_detect_freq_in_second`项,默认为600s,详细信息参见本文[3.4节](/#/Documents/progress/chap3/sec4))设置有效数据的范围,即距离清除操作触发时间为`stat_monitor_detect_freq_in_second`以内的数据为有效数据。为了保证系统的稳定,不允许频繁地删除统计量,因此如果配置参数的时间小于默认值,系统不采用配置参数而使用默认参数。 + +注:当前版本统计的写入数据统计信息会同时统计用户写入的数据与系统内部监控数据。 + 
+写入数据统计项列表: + +* TOTAL_POINTS (全局) + +|名字| TOTAL\_POINTS | +|:---:|:---| +|描述| 写入总点数| +|时间序列名称| root.stats.write.global.TOTAL\_POINTS | +|服务器重启后是否重置| 是 | +|例子| select TOTAL_POINTS from root.stats.write.global| + +* TOTAL\_REQ\_SUCCESS (全局) + +|名字| TOTAL\_REQ\_SUCCESS | +|:---:|:---| +|描述| 写入请求成功次数| +|时间序列名称| root.stats.write.global.TOTAL\_REQ\_SUCCESS | +|服务器重启后是否重置| 是 | +|例子| select TOTAL\_REQ\_SUCCESS from root.stats.write.global| + +* TOTAL\_REQ\_FAIL (全局) + +|名字| TOTAL\_REQ\_FAIL | +|:---:|:---| +|描述| 写入请求失败次数| +|时间序列名称| root.stats.write.global.TOTAL\_REQ\_FAIL | +|服务器重启后是否重置| 是 | +|例子| select TOTAL\_REQ\_FAIL from root.stats.write.global| + + +* TOTAL\_POINTS\_FAIL (全局) + +|名字| TOTAL\_POINTS\_FAIL | +|:---:|:---| +|描述| 写入点数失败次数| +|时间序列名称| root.stats.write.global.TOTAL\_POINTS\_FAIL | +|服务器重启后是否重置| 是 | +|例子| select TOTAL\_POINTS\_FAIL from root.stats.write.global| + + +* TOTAL\_POINTS\_SUCCESS (全局) + +|名字| TOTAL\_POINTS\_SUCCESS | +|:---:|:---| +|描述| 写入点数成功次数| +|时间序列名称| root.stats.write.global.TOTAL\_POINTS\_SUCCESS | +|服务器重启后是否重置| 是 | +|例子| select TOTAL\_POINTS\_SUCCESS from root.stats.write.global| + +* TOTAL\_REQ\_SUCCESS (存储组) + +|名字| TOTAL\_REQ\_SUCCESS | +|:---:|:---| +|描述| 写入存储组成功次数| +|时间序列名称| root.stats.write.\.TOTAL\_REQ\_SUCCESS | +|服务器重启后是否重置| 是 | +|例子| select TOTAL\_REQ\_SUCCESS from root.stats.write.\| + +* TOTAL\_REQ\_FAIL (存储组) + +|名字| TOTAL\_REQ\_FAIL | +|:---:|:---| +|描述| 写入某个Storage group的请求失败次数| +|时间序列名称| root.stats.write.\.TOTAL\_REQ\_FAIL | +|服务器重启后是否重置| 是 | +|例子| select TOTAL\_REQ\_FAIL from root.stats.write.\| + + +* TOTAL\_POINTS\_SUCCESS (存储组) + +|名字| TOTAL\_POINTS\_SUCCESS | +|:---:|:---| +|描述| 写入某个Storage group成功的点数| +|时间序列名称| root.stats.write.\.TOTAL\_POINTS\_SUCCESS | +|服务器重启后是否重置| 是 | +|例子| select TOTAL\_POINTS\_SUCCESS from root.stats.write.\| + + +* TOTAL\_POINTS\_FAIL (存储组) + +|名字| TOTAL\_POINTS\_FAIL | +|:---:|:---| +|描述| 写入某个Storage group失败的点数| +|时间序列名称| root.stats.write.\.TOTAL\_POINTS\_FAIL | +|服务器重启后是否重置| 是 | +|例子| select 
TOTAL\_POINTS\_FAIL from root.stats.write.\| + +> 其中,\ 为所需进行数据统计的存储组名称,存储组中的“.”使用“_”代替。例如:名为'root.a.b'的存储组命名为:'root\_a\_b'。 + +下面为您展示两个具体的例子。用户可以通过`SELECT`语句查询自己所需要的写入数据统计项。(查询方法与普通的时间序列查询方式一致) + +我们以查询全局统计量总写入成功数(`TOTAL_POINTS_SUCCESS`)为例,用IoTDB SELECT语句查询它的值。SQL语句如下: + +``` +select TOTAL_POINTS_SUCCESS from root.stats.write.global +``` + +我们以查询存储组root.ln的统计量总写入成功数(`TOTAL_POINTS_SUCCESS`)为例,用IoTDB SELECT语句查询它的值。SQL语句如下: + +``` +select TOTAL_POINTS_SUCCESS from root.stats.write.root_ln +``` + +若您需要查询当前系统的写入统计信息,您可以使用`MAX_VALUE()`聚合函数进行查询,SQL语句如下: +``` +select MAX_VALUE(TOTAL_POINTS_SUCCESS) from root.stats.write.root_ln +``` +## 性能监控 + +性能监控模块用来监控IoTDB每一个操作的耗时,以便用户更好地了解数据库的整体性能。此模块会统计每一种操作的平均耗时,以及耗时在一定时间区间内(1ms,4ms,16ms,64ms,256ms,1024ms,以上)的操作的比例。输出文件在log_measure.log中。输出样例如下: + + + +### 配置参数 + +配置文件位置:conf/iotdb-engine.properties + +
**表 -配置参数以及描述项** + +|参数|默认值|描述| +|:---|:---|:---| +|enable\_performance\_stat|false|是否开启性能监控模块| +|performance\_stat\_display\_interval|60000|打印统计结果的时间延迟,以毫秒为单位| +|performance_stat_memory_in_kb|20|性能监控模块使用的内存阈值,单位为KB| +
+ +### 利用JMX MBean动态调节参数 + +通过端口31999连接jconsole,并在上方菜单项中选择‘MBean’. 展开侧边框并选择 'org.apache.iotdb.db.cost.statistic'. 将会得到如下图所示结果: + + + +**属性** + +1. EnableStat:是否开启性能监控模块,如果被设置为true,则性能监控模块会记录每个操作的耗时并打印结果。这个参数不能通过jconsole直接更改,但可通过下方的函数来进行动态设置。 +2. DisplayIntervalInMs:相邻两次打印结果的时间间隔。这个参数可以直接设置,但它要等性能监控模块重启才会生效。重启性能监控模块可以通过先调用 stopStatistic()然后调用startContinuousStatistics()或者直接调用 startOneTimeStatistics()实现。 +3. OperationSwitch:这个属性用来展示针对每一种操作是否开启了监控统计,map的键为操作的名字,值为是否针对这种操作开启性能监控。这个参数不能通过jconsole直接更改,但可通过下方的 'changeOperationSwitch()'函数来进行动态设置。 + +**操作** + +1. startContinuousStatistics:开启性能监控并以‘DisplayIntervalInMs’的时间间隔打印统计结果。 +2. startOneTimeStatistics:开启性能监控并以‘DisplayIntervalInMs’的时间延迟打印一次统计结果。 +3. stopStatistic:关闭性能监控。 +4. clearStatisticalState(): 清除已统计的结果,重新开始统计。 +5. changeOperationSwitch(String operationName, Boolean operationState):设置是否针对每一种不同的操作开启监控。参数‘operationName’是操作的名称,在OperationSwitch属性中展示了所有操作的名称。参数‘operationState’是操作的状态,打开或者关闭。如果状态设置成功则此函数会返回true,否则返回false。 + +### 自定义操作类型监控其他区域 + +**增加操作项** + +在org.apache.iotdb.db.cost.statistic.Operation类中增加一个枚举项来表示新增的操作. + +**在监控区域增加监控代码** + +在监控开始区域增加计时代码: + + long t0 = System.currentTimeMillis(); + +在监控结束区域增加记录代码: + + Measurement.INSTANCE.addOperationLatency(Operation, t0); + +## cache命中率统计 + +### 概述 + +为了提高查询性能,IoTDB对ChunkMetaData和TsFileMetaData进行了缓存。用户可以通过debug级别的日志以及MXBean两种方式来查看缓存的命中率,并根据缓存命中率以及系统内存来调节缓存所使用的内存大小。使用MXBean查看缓存命中率的方法为: +1. 通过端口31999连接jconsole,并在上方菜单项中选择‘MBean’. +2. 展开侧边框并选择 'org.apache.iotdb.db.service'. 
将会得到如下图所示结果: + + +## 系统日志 + +IoTDB支持用户通过修改日志配置文件的方式对IoTDB系统日志(如日志输出级别等)进行配置,系统日志配置文件默认位置在$IOTDB_HOME/conf文件夹下,默认的日志配置文件名为logback.xml。用户可以通过增加或更改其中的xml树型节点参数对系统运行日志的相关配置进行修改。详细配置说明参看本文日志文件配置说明。 + +同时,为了方便在系统运行过程中运维人员对系统的调试,我们为系统运维人员提供了动态修改日志配置的JMX接口,能够在系统不重启的前提下实时对系统的Log模块进行配置。详细使用方法参看动态系统日志配置说明。 + +### 动态系统日志配置说明 + +#### 连接JMX + +本节以Jconsole为例介绍连接JMX并进入动态系统日志配置模块的方法。启动Jconsole控制页面,在新建连接处建立与IoTDB Server的JMX连接(可以选择本地进程或给定IoTDB的IP及PORT进行远程连接,IoTDB的JMX服务默认运行端口为31999),如下图使用远程进程连接Localhost下运行在31999端口的IoTDB JMX服务。 + + + +连接到JMX后,您可以通过MBean选项卡找到名为`ch.qos.logback.classic`的`MBean`,如下图所示。 + + + +在`ch.qos.logback.classic`的MBean操作(Operations)选项中,可以看到当前动态系统日志配置支持的5种接口,您可以通过使用相应的方法,来执行相应的操作,操作页面如图。 + + + +#### 动态系统日志接口说明 + +* reloadDefaultConfiguration接口 + +该方法为重新加载默认的logback配置文件,用户可以先对默认的配置文件进行修改,然后调用该方法将修改后的配置文件重新加载到系统中,使其生效。 + +* reloadByFileName接口 + +该方法为加载一个指定路径的logback配置文件,并使其生效。该方法接受一个名为p1的String类型的参数,该参数为需要指定加载的配置文件路径。 + +* getLoggerEffectiveLevel接口 + +该方法为获取指定Logger当前生效的日志级别。该方法接受一个名为p1的String类型的参数,该参数为指定Logger的名称。该方法返回指定Logger当前生效的日志级别。 + +* getLoggerLevel接口 + +该方法为获取指定Logger的日志级别。该方法接受一个名为p1的String类型的参数,该参数为指定Logger的名称。该方法返回指定Logger的日志级别。 + +需要注意的是,该方法与`getLoggerEffectiveLevel`方法的区别在于,该方法返回的是指定Logger在配置文件中被设定的日志级别,如果用户没有对该Logger进行日志级别的设定,则返回空。按照Logback的日志级别继承机制,如果一个Logger没有被显式地设定日志级别,其将会从其最近的祖先继承日志级别的设定。这时,调用`getLoggerEffectiveLevel`方法将返回该Logger生效的日志级别;而调用本节所述方法,将返回空。 diff --git a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/1-Grafana.md b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/1-Grafana.md similarity index 99% rename from docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/1-Grafana.md rename to docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/1-Grafana.md index 51e472c31f7c5151ff8b047ca4fdf3ff552445e2..07d18179e1b20f6a4e006562086dcab3129a814a 100644 --- a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/1-Grafana.md +++ b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/1-Grafana.md @@ -18,7 +18,7 @@ 
under the License. --> -# 第10章: 生态集成 +# 第7章: 生态集成 ## 概览 diff --git a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/2-TsFile Hadoop Connector.md b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/2-TsFile Hadoop Connector.md similarity index 99% rename from docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/2-TsFile Hadoop Connector.md rename to docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/2-TsFile Hadoop Connector.md index 1293154d108ad2eaa7506e92aec8891e78613439..2ff68cf88f00127f7cd481c53320de29ffc65abb 100644 --- a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/2-TsFile Hadoop Connector.md +++ b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/2-TsFile Hadoop Connector.md @@ -18,7 +18,7 @@ under the License. --> -# 第10章: 生态集成 +# 第7章: 生态集成 # TsFile的Hadoop连接器 ## 概要 diff --git a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/3-TsFile Spark Connector.md b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/3-TsFile Spark Connector.md similarity index 95% rename from docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/3-TsFile Spark Connector.md rename to docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/3-TsFile Spark Connector.md index 3387b083c85ecbee56d04601c3ac18cf67d03f5d..ae5dc309bd40ce2216f45d830d8de115ffe76e6e 100644 --- a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/3-TsFile Spark Connector.md +++ b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/3-TsFile Spark Connector.md @@ -19,6 +19,6 @@ --> -# 第10章: 生态集成 +# 第7章: 生态集成 # TsFile的Spark连接器 Coming Soon. 
\ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/4-Spark IoTDB Connector.md b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/4-Spark IoTDB Connector.md new file mode 100644 index 0000000000000000000000000000000000000000..0f0719edaca34eb099b05379055982634661a838 --- /dev/null +++ b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/4-Spark IoTDB Connector.md @@ -0,0 +1,23 @@ + +# 第7章: 生态集成 + +Coming Soon. \ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/5-Tsfile Hive Connector.md b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/5-Tsfile Hive Connector.md similarity index 99% rename from docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/5-Tsfile Hive Connector.md rename to docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/5-Tsfile Hive Connector.md index 18255e136c656e995acce10c2938c907bb5d7a52..a6c1a8e1db77ecef12c2ea7b0768750ad1f6ddd8 100644 --- a/docs/Documentation-CHN/UserGuide/10-Ecosystem Integration/5-Tsfile Hive Connector.md +++ b/docs/Documentation-CHN/UserGuide/7-Ecosystem Integration/5-Tsfile Hive Connector.md @@ -18,6 +18,7 @@ under the License. --> +# 第7章: 生态集成 ## 概要 diff --git a/docs/Documentation-CHN/UserGuide/3-Deployment/3-Build and use IoTDB by Dockerfile.md b/docs/Documentation-CHN/UserGuide/8-System Design (Developer)/1-Hierarchy.md similarity index 95% rename from docs/Documentation-CHN/UserGuide/3-Deployment/3-Build and use IoTDB by Dockerfile.md rename to docs/Documentation-CHN/UserGuide/8-System Design (Developer)/1-Hierarchy.md index e2cec078dad6820a1d10734b90eeb412c95a4afa..6ac02553ddf0667178792e2b63f55e28196e1517 100644 --- a/docs/Documentation-CHN/UserGuide/3-Deployment/3-Build and use IoTDB by Dockerfile.md +++ b/docs/Documentation-CHN/UserGuide/8-System Design (Developer)/1-Hierarchy.md @@ -19,6 +19,6 @@ --> -# 第3章 系统部署 +# 第8章: 系统设计 Coming Soon. 
\ No newline at end of file diff --git a/docs/Documentation-CHN/UserGuide/5-Management/4-Data Management.md b/docs/Documentation-CHN/UserGuide/8-System Design (Developer)/2-Files.md similarity index 99% rename from docs/Documentation-CHN/UserGuide/5-Management/4-Data Management.md rename to docs/Documentation-CHN/UserGuide/8-System Design (Developer)/2-Files.md index 21c459db6251fb96273a0564534954cac589ca20..1c8f38be0e28b61237157b5998b1ff4db52f0122 100644 --- a/docs/Documentation-CHN/UserGuide/5-Management/4-Data Management.md +++ b/docs/Documentation-CHN/UserGuide/8-System Design (Developer)/2-Files.md @@ -19,10 +19,9 @@ --> -# 第5章 系统管理 +# 第8章: 系统设计 - -## 数据管理 +## 文件 本节将介绍IoTDB的数据存储方式,便于您对IoTDB的数据管理有一个直观的了解。 diff --git a/docs/Documentation-CHN/UserGuide/8-Distributed Architecture/1-Shared Storage Architecture.md b/docs/Documentation-CHN/UserGuide/8-System Design (Developer)/3-Writing Data on HDFS.md similarity index 98% rename from docs/Documentation-CHN/UserGuide/8-Distributed Architecture/1-Shared Storage Architecture.md rename to docs/Documentation-CHN/UserGuide/8-System Design (Developer)/3-Writing Data on HDFS.md index 6ef719fc40a597f17180cc43c3a8cf5ee2e517bf..855fd1af7aa624f8dd8f0115052a18110ff05cb9 100644 --- a/docs/Documentation-CHN/UserGuide/8-Distributed Architecture/1-Shared Storage Architecture.md +++ b/docs/Documentation-CHN/UserGuide/8-System Design (Developer)/3-Writing Data on HDFS.md @@ -19,7 +19,9 @@ --> -# 第8章: 分布式架构 +# 第8章: 系统设计 + +## 使用HDFS存储数据 ## 存储共享架构 diff --git a/docs/Documentation-CHN/UserGuide/8-Distributed Architecture/2-Shared Nothing Architecture.md b/docs/Documentation-CHN/UserGuide/8-System Design (Developer)/4-Shared Nothing Cluster.md similarity index 97% rename from docs/Documentation-CHN/UserGuide/8-Distributed Architecture/2-Shared Nothing Architecture.md rename to docs/Documentation-CHN/UserGuide/8-System Design (Developer)/4-Shared Nothing Cluster.md index 
13fa0b70b4580683144fbdeeb0ca591527af6acb..1c0a79bfe435bf4a354a10062ef14ded3bb45bd4 100644 --- a/docs/Documentation-CHN/UserGuide/8-Distributed Architecture/2-Shared Nothing Architecture.md +++ b/docs/Documentation-CHN/UserGuide/8-System Design (Developer)/4-Shared Nothing Cluster.md @@ -19,7 +19,7 @@ --> -# 第8章: 分布式架构 +# 第8章: 系统设计 ## Shared-nothing 架构 diff --git a/docs/Documentation/UserGuide/0-Content.md b/docs/Documentation/UserGuide/0-Content.md index c23efae870bcb31ef6114608b0caa9265d83eb2e..bb0870255cfccad2bb10ca6ef67572046b4d084b 100644 --- a/docs/Documentation/UserGuide/0-Content.md +++ b/docs/Documentation/UserGuide/0-Content.md @@ -18,57 +18,55 @@ under the License. --> -# Chapter 0: QuickStart + + +# Chapter 0: Get Started * 1-QuickStart * 2-Frequently asked questions -* 3-Reference +* 3-Publication # Chapter 1: Overview * 1-What is IoTDB * 2-Architecture * 3-Scenario * 4-Features # Chapter 2: Concept -* 1-Key Concepts and Terminology -* 2-Data Type -* 3-Encoding -* 4-Compression -# Chapter 3: Deployment -* 1-Deployment -* 2-Configuration -* 3-Build and use IoTDB by Dockerfile -* 4-TsFile library Intallation -# Chapter 4: Operation Manual -* 1-Cli Shell Tool -* 2-Data Model Selection -* 3-Data Import -* 4-Data Query -* 5-Data Maintenance -* 6-Priviledge Management -* 7-IoTDB Query Language -* 8-TsFile Usage -# Chapter 5: Management -* 1-System Monitor -* 2-Performance Monitor -* 3-System log -* 4-Data Management -# Chapter 6: API -* 1-JDBC API -* 2-Session API -* 3-Python API -# Chapter 7: System Design -* 1-Hierarchy -# Chapter 8: Distributed Architecture -* 1-Shared Storage Architecture -* 2-Shared Nothing Architecture -# Chapter 9: System Tools +* 1-Data Model and Terminology +* 2-Data Type +* 3-Encoding +* 4-Compression +# Chapter 3: Server +* 1-Download +* 2-Single Node Setup +* 3-Cluster Setup +* 4-Config Manual +* 5-Docker Image +# Chapter 4: Client +* 1-Command Line Interface(CLI) +* 2-Programming - JDBC +* 3-Programming - Session +* 4-Programming 
- Other Language +* 5-Programming - TsFile API (TimeSeries File Format) +# Chapter 5: Operation Manual +* 1-DDL (Data Definition Language) +* 2-DML (Data Manipulation Language) +* 3-Account Management Statements +* 4-SQL Reference +# Chapter 6: System Tools * 1-Sync Tool * 2-Memory Estimation Tool * 3-JMX Tool * 4-Watermark Tool * 5-Log Visualizer * 6-Query History Visualization Tool -# Chapter 10: Ecosystem Integration +* 7-Monitor and Log Tools +# Chapter 7: Ecosystem Integration * 1-Grafana * 2-TsFile Hadoop Connector * 3-TsFile Spark Connector -* 4-Spark IoTDB Connector \ No newline at end of file +* 4-Spark IoTDB Connector +* 5-Tsfile Hive Connector +# Chapter 8: System Design (Developer) +* 1-Hierarchy +* 2-Files +* 3-Writing Data on HDFS +* 4-Shared Nothing Cluster diff --git a/docs/Documentation/UserGuide/0-QuickStart/1-QuickStart.md b/docs/Documentation/UserGuide/0-Get Started/1-QuickStart.md similarity index 85% rename from docs/Documentation/UserGuide/0-QuickStart/1-QuickStart.md rename to docs/Documentation/UserGuide/0-Get Started/1-QuickStart.md index 34cffaefe02ba60efee72f27f02997c5eadaab1a..46396b54a19c4ce72180f2390a8a5edb653eac2c 100755 --- a/docs/Documentation/UserGuide/0-QuickStart/1-QuickStart.md +++ b/docs/Documentation/UserGuide/0-Get Started/1-QuickStart.md @@ -48,8 +48,7 @@ This short guide will walk you through the basic process of using IoTDB. For a m To use IoTDB, you need to have: 1. Java >= 1.8 (Please make sure the environment path has been set) -2. Maven >= 3.1 (If you want to compile and install IoTDB from source code) -3. Set the max open files num as 65535 to avoid "too many open files" problem. +2. Set the max open files num as 65535 to avoid "too many open files" problem. 
## Installation @@ -60,27 +59,14 @@ IoTDB provides you three installation methods, you can refer to the following su * Using Docker:The path to the dockerfile is https://github.com/apache/incubator-iotdb/blob/master/docker/Dockerfile -Here in the Quick Start, we give a brief introduction of using source code to install IoTDB. For further information, please refer to Chapter 4 of the User Guide. +Here in the Quick Start, we give a brief introduction to install IoTDB. For further information, please refer to Chapter 3 of the User Guide. -## Build from source +## Download -You can download the source code from: +You can download the binary file from: +[Here](https://iotdb.apache.org/#/Download) -``` -git clone https://github.com/apache/incubator-iotdb.git -``` - -Under the root path of incubator-iotdb: - -``` -> mvn clean package -DskipTests -``` - -Then the binary version (including both server and client) can be found at **distribution/target/apache-iotdb-{project.version}-incubating-bin.zip** - -> NOTE: Directories "service-rpc/target/generated-sources/thrift" and "server/target/generated-sources/antlr3" need to be added to sources roots to avoid compilation errors in IDE. - -### Configurations +## Configurations configuration files are under "conf" folder @@ -88,7 +74,7 @@ configuration files are under "conf" folder * system config module (`tsfile-format.properties`, `iotdb-engine.properties`) * log config module (`logback.xml`). -For more, see [Chapter3: Deployment](https://iotdb.apache.org/#/Documents/progress/chap3/sec1) in detail. +For more, see [Chapter3: Server](https://iotdb.apache.org/#/Documents/progress/chap3/sec1) in detail. ## Start @@ -239,7 +225,7 @@ or IoTDB> exit ``` -For more on what commands are supported by IoTDB SQL, see [IoTDB SQL Language](https://iotdb.apache.org/#/Documents/progress/chap4/sec7). +For more on what commands are supported by IoTDB SQL, see [SQL Reference](https://iotdb.apache.org/#/Documents/progress/chap5/sec4). 
### Stop IoTDB @@ -253,16 +239,6 @@ The server can be stopped with ctrl-C or the following script: > sbin\stop-server.bat ``` -## Only build server - -Under the root path of incubator-iotdb: - -``` -> mvn clean package -pl server -am -DskipTests -``` - -After build, the IoTDB server will be at the folder "server/target/iotdb-server-{project.version}". - ## Only build client diff --git a/docs/Documentation/UserGuide/0-QuickStart/2-Frequently asked questions.md b/docs/Documentation/UserGuide/0-Get Started/2-Frequently asked questions.md similarity index 97% rename from docs/Documentation/UserGuide/0-QuickStart/2-Frequently asked questions.md rename to docs/Documentation/UserGuide/0-Get Started/2-Frequently asked questions.md index 096e4decfa1d86f3920926ad7eeb00b53acb4700..8790cf7a34288290a8a9cc261b02f441563b9f08 100644 --- a/docs/Documentation/UserGuide/0-QuickStart/2-Frequently asked questions.md +++ b/docs/Documentation/UserGuide/0-Get Started/2-Frequently asked questions.md @@ -113,7 +113,7 @@ Yes. IoTDB has intense integration with Open Source Ecosystem. IoTDB supports [H ## How does IoTDB handle duplicate points? -A data point is uniquely identified by a full time series path (e.g. ```root.vehicle.d0.s0```) and timestamp. If you submit a new point with the same path and timestamp as an existing point, +A data point is uniquely identified by a full time series path (e.g. ```root.vehicle.d0.s0```) and timestamp. If you submit a new point with the same path and timestamp as an existing point, IoTDB will update the value of this point instead of inserting a new point. ## How can I tell what type of the specific timeseries? 
diff --git a/docs/Documentation/UserGuide/0-QuickStart/3-Reference.md b/docs/Documentation/UserGuide/0-Get Started/3-Publication.md similarity index 100% rename from docs/Documentation/UserGuide/0-QuickStart/3-Reference.md rename to docs/Documentation/UserGuide/0-Get Started/3-Publication.md diff --git a/docs/Documentation/UserGuide/1-Overview/2-Architecture.md b/docs/Documentation/UserGuide/1-Overview/2-Architecture.md index d65a722a25397ddf797eff72289c2e9acd725ea2..c03af1a7cb02f87278ad1adc86450f11109fce4a 100644 --- a/docs/Documentation/UserGuide/1-Overview/2-Architecture.md +++ b/docs/Documentation/UserGuide/1-Overview/2-Architecture.md @@ -27,7 +27,7 @@ Besides IoTDB engine, we also developed several components to provide better IoT IoTDB suite can provide a series of functions in the real situation such as data collection, data writing, data storage, data query, data visualization and data analysis. Figure 1.1 shows the overall application architecture brought by all the components of the IoTDB suite. - + As shown in Figure 1.1, users can use JDBC to import timeseries data collected by sensor on the device to local/remote IoTDB. These timeseries data may be system state data (such as server load and CPU memory, etc.), message queue data, timeseries data from applications, or other timeseries data in the database. Users can also write the data directly to the TsFile (local or on HDFS). 
diff --git a/docs/Documentation/UserGuide/2-Concept/1-Key Concepts and Terminology.md b/docs/Documentation/UserGuide/2-Concept/1-Data Model and Terminology.md similarity index 81% rename from docs/Documentation/UserGuide/2-Concept/1-Key Concepts and Terminology.md rename to docs/Documentation/UserGuide/2-Concept/1-Data Model and Terminology.md index 5967101297e7246e060c9c4fe9119d7693f091e5..d5d08701656f9f81b199835e2f38cb3301d21063 100644 --- a/docs/Documentation/UserGuide/2-Concept/1-Key Concepts and Terminology.md +++ b/docs/Documentation/UserGuide/2-Concept/1-Data Model and Terminology.md @@ -20,9 +20,20 @@ --> # Chapter 2: Concept -## Key Concepts and Terminology +## Data Model and Terminology +To make this manual more practical, we will use a specific scenario example to illustrate how to operate IoTDB databases at all stages of use. See [this page](https://github.com/apache/incubator-iotdb/blob/master/docs/Documentation/OtherMaterial-Sample%20Data.txt) for details. For convenience, we also provide you with a sample data file from a real scenario to import into the IoTDB system for trial and operation. -The following basic concepts are involved in IoTDB: +Download file: [IoTDB-SampleData.txt](https://raw.githubusercontent.com/apache/incubator-iotdb/master/docs/Documentation/OtherMaterial-Sample%20Data.txt). + +According to the data attribute layers described in [sample data](https://raw.githubusercontent.com/apache/incubator-iotdb/master/docs/Documentation/OtherMaterial-Sample%20Data.txt), we can express it as an attribute hierarchy structure based on the coverage of attributes and the subordinate relationship between them, as shown in Figure 2.1 below. Its hierarchical relationship is: power group layer - power plant layer - device layer - sensor layer. ROOT is the root node, and each node of the sensor layer is called a leaf node. 
In the process of using IoTDB, you can directly connect the attributes on the path from ROOT node to each leaf node with ".", thus forming the name of a timeseries in IoTDB. For example, the left-most path in Figure 2.1 can generate a timeseries named `ROOT.ln.wf01.wt01.status`. + +
+ +**Figure 2.1 Attribute hierarchy structure**
+ +After getting the name of the timeseries, we need to set up the storage group according to the actual scenario and scale of the data. Because in the scenario of this chapter, data usually arrives in the unit of groups (i.e., data may be across electric fields and devices), in order to avoid frequent switching of IO when writing data, and to meet the user's requirement of physical isolation of data in the unit of groups, we set the storage group at the group layer. + +Here are the basic concepts of the model involved in IoTDB: + +* Device + +@@ -208,14 +219,3 @@ Relative time refers to the time relative to the server time ```now()``` and ``` + ``` +> Note:There must be spaces on the left and right of '+' and '-'. + -* Value - -The value of a time series is actually the value sent by a sensor to IoTDB. This value can be stored by IoTDB according to the data type. At the same time, users can select the compression mode and the corresponding encoding mode according to the data type of this value. See [Data Type](/#/Documents/progress/chap2/sec2), [Encoding](/#/Documents/progress/chap2/sec3) and [Compression](/#/Documents/progress/chap2/sec4) of this document for details on data type and corresponding encoding. - -* Point - -A data point is made up of a timestamp value pair (timestamp, value). - -* Column - -A column of data contains all values belonging to a time series and the timestamps corresponding to these values. When there are multiple columns of data, IoTDB merges the timestamps into multiple < timestamp-value > pairs (timestamp, value, value,...). diff --git a/docs/Documentation/UserGuide/2-Concept/2-Data Type.md b/docs/Documentation/UserGuide/2-Concept/2-Data Type.md index 04572671b149b909fc9efd27d269057ade91802a..7d2523d25bbe0b378a3125b65d806dd3b943e8cb 100644 --- a/docs/Documentation/UserGuide/2-Concept/2-Data Type.md +++ b/docs/Documentation/UserGuide/2-Concept/2-Data Type.md @@ -31,7 +31,7 @@ IoTDB supports six data types in total: * TEXT (String). 
-The time series of **FLOAT** and **DOUBLE** type can specify (MAX\_POINT\_NUMBER, see [this page](/#/Documents/progress/chap4/sec7) for more information on how to specify), which is the number of digits after the decimal point of the floating point number, if the encoding method is [RLE](/#/Documents/progress/chap2/sec3) or [TS\_2DIFF](/#/Documents/progress/chap2/sec3) (Refer to [Create Timeseries Statement](/#/Documents/progress/chap5/sec1) for more information on how to specify). If MAX\_POINT\_NUMBER is not specified, the system will use [float\_precision](/#/Documents/progress/chap4/sec2) in the configuration file `tsfile-format.properties`. +The time series of **FLOAT** and **DOUBLE** type can specify (MAX\_POINT\_NUMBER, see [this page](/#/Documents/progress/chap5/sec4) for more information on how to specify), which is the number of digits after the decimal point of the floating point number, if the encoding method is [RLE](/#/Documents/progress/chap2/sec3) or [TS\_2DIFF](/#/Documents/progress/chap2/sec3) (Refer to [Create Timeseries Statement](/#/Documents/progress/chap5/sec4) for more information on how to specify). If MAX\_POINT\_NUMBER is not specified, the system will use [float\_precision](/#/Documents/progress/chap3/sec4) in the configuration file `tsfile-format.properties`. * For Float data value, The data range is (-Integer.MAX_VALUE, Integer.MAX_VALUE), rather than Float.MAX_VALUE, and the max_point_number is 19, it is because of the limitation of function Math.round(float) in Java. * For Double data value, The data range is (-Long.MAX_VALUE, Long.MAX_VALUE), rather than Double.MAX_VALUE, and the max_point_number is 19, it is because of the limitation of function Math.round(double) in Java (Long.MAX_VALUE=9.22E18). 
diff --git a/docs/Documentation/UserGuide/2-Concept/3-Encoding.md b/docs/Documentation/UserGuide/2-Concept/3-Encoding.md index 26f1d1bdf9831828165db28e9b8f09e33933cd76..f487c90fb8d4885510aaf7f03abdd88cb3b72a52 100644 --- a/docs/Documentation/UserGuide/2-Concept/3-Encoding.md +++ b/docs/Documentation/UserGuide/2-Concept/3-Encoding.md @@ -32,13 +32,13 @@ PLAIN encoding, the default encoding mode, i.e, no encoding, supports multiple d Second-order differential encoding is more suitable for encoding monotonically increasing or decreasing sequence data, and is not recommended for sequence data with large fluctuations. -Second-order differential encoding can also be used to encode floating-point numbers, but it is necessary to specify reserved decimal digits (MAX\_POINT\_NUMBER, see [this page](/#/Documents/progress/chap5/sec1) for more information on how to specify) when creating time series. It is more suitable for storing sequence data where floating-point values appear continuously, monotonously increase or decrease, and it is not suitable for storing sequence data with high precision requirements after the decimal point or with large fluctuations. +Second-order differential encoding can also be used to encode floating-point numbers, but it is necessary to specify reserved decimal digits (MAX\_POINT\_NUMBER, see [this page](/#/Documents/progress/chap5/sec4) for more information on how to specify) when creating time series. It is more suitable for storing sequence data where floating-point values appear continuously, monotonously increase or decrease, and it is not suitable for storing sequence data with high precision requirements after the decimal point or with large fluctuations. * RLE Run-length encoding is more suitable for storing sequence with continuous integer values, and is not recommended for sequence data with most of the time different values. 
-Run-length encoding can also be used to encode floating-point numbers, but it is necessary to specify reserved decimal digits (MAX\_POINT\_NUMBER, see [this page](/#/Documents/progress/chap4/sec7) for more information on how to specify) when creating time series. It is more suitable for storing sequence data where floating-point values appear continuously, monotonously increasing or decreasing, and it is not suitable for storing sequence data with high precision requirements after the decimal point or with large fluctuations. +Run-length encoding can also be used to encode floating-point numbers, but it is necessary to specify reserved decimal digits (MAX\_POINT\_NUMBER, see [this page](/#/Documents/progress/chap5/sec4) for more information on how to specify) when creating time series. It is more suitable for storing sequence data where floating-point values appear continuously, monotonously increasing or decreasing, and it is not suitable for storing sequence data with high precision requirements after the decimal point or with large fluctuations. * GORILLA diff --git a/docs/Documentation/UserGuide/2-Concept/4-Compression.md b/docs/Documentation/UserGuide/2-Concept/4-Compression.md index d7c1ecb3058d3484ed9d70763b4b319d7ccb777d..8ee26aebd44e769fcf56bc0ba63da73d59d67f2c 100644 --- a/docs/Documentation/UserGuide/2-Concept/4-Compression.md +++ b/docs/Documentation/UserGuide/2-Concept/4-Compression.md @@ -25,4 +25,10 @@ When the time series is written and encoded as binary data according to the specified type, IoTDB compresses the data using compression technology to further improve space storage efficiency. 
Although both encoding and compression are designed to improve storage efficiency, encoding techniques are usually only available for specific data types (e.g., second-order differential encoding is only suitable for INT32 or INT64 data type, and storing floating-point numbers requires multiplying them by 10^m to convert to integers), after which the data is converted to a binary stream. The compression method (SNAPPY) compresses the binary stream, so the use of the compression method is no longer limited by the data type. -IoTDB allows you to specify the compression method of the column when creating a time series. IoTDB now supports several kinds of compression: UNCOMPRESSED (no compression), SNAPPY, GZIP, LZ0, SDT, PAA and PLA. The specified syntax for compression is detailed in [Create Timeseries Statement](/#/Documents/progress/chap4/sec7). \ No newline at end of file +IoTDB allows you to specify the compression method of the column when creating a time series, and now supports two compression methods: + +* UNCOMPRESSED + +* SNAPPY + +The specified syntax for compression is detailed in [Create Timeseries Statement](/#/Documents/progress/chap5/sec4). diff --git a/docs/Documentation/UserGuide/3-Deployment/1-Deployment.md b/docs/Documentation/UserGuide/3-Deployment/1-Deployment.md deleted file mode 100644 index 951556dfdc553e6105ac1bb3e36fbb0b62c02eee..0000000000000000000000000000000000000000 --- a/docs/Documentation/UserGuide/3-Deployment/1-Deployment.md +++ /dev/null @@ -1,160 +0,0 @@ - - -# Chapter 3: Deployment - -## Installation - -IoTDB provides you two installation methods, you can refer to the following suggestions, choose one of them: - -* Installation from binary files. Download the binary files from the official website. This is the recommended method, in which you will get a binary released package which is out-of-the-box. -* Installation from source code. If you need to modify the code yourself, you can use this method.
- -### Prerequisites - -To install and use IoTDB, you need to have: - -1. Java >= 1.8 (Please make sure the environment path has been set) -2. Maven >= 3.1 (If you want to compile and install IoTDB from source code) -3. TsFile >= 0.7.0 (TsFile Github page: [https://github.com/thulab/tsfile](https://github.com/thulab/tsfile)) -4. IoTDB-JDBC >= 0.7.0 (IoTDB-JDBC Github page: [https://github.com/thulab/iotdb-jdbc](https://github.com/thulab/iotdb-jdbc)) - -TODO: TsFile and IoTDB-JDBC dependencies will be removed after the project reconstruct. - -### Installation from binary files - -IoTDB provides you binary files which contains all the necessary components for the IoTDB system to run. You can get them on our website [http://tsfile.org/download](http://tsfile.org/download). - -``` -NOTE: -iotdb-.tar.gz # For Linux or MacOS -iotdb-.zip # For Windows -``` - -After downloading, you can extract the IoTDB tarball using the following operations: - -``` -Shell > uzip iotdb-.zip # For Windows -Shell > tar -zxf iotdb-.tar.gz # For Linux or MacOS -``` - -The IoTDB project will be at the subfolder named iotdb. The folder will include the following contents: - -``` -iotdb/ <-- root path -| -+- sbin/ <-- script files -| -+- conf/ <-- configuration files -| -+- lib/ <-- project dependencies -| -+- LICENSE <-- LICENSE -``` - -### Installation from source code - -You can get the released source code from https://iotdb.apache.org/#/Download, or from the git repository https://github.com/apache/incubator-iotdb/tree/master - -Now suppose your directory is like this: - -``` -> pwd -/workspace/incubator-iotdb - -> ls -l -incubator-iotdb/ <-- root path -| -+- server/ -| -+- jdbc/ -| -+- client/ -| -... 
-| -+- pom.xml -``` - -Let `$IOTDB_HOME = /workspace/incubator-iotdb/server/target/iotdb-server-{project.version}` - -Let `$IOTDB_CLI_HOME = /workspace/incubator-iotdb/client/target/iotdb-client-{project.version}` - -Note: -* if `IOTDB_HOME` is not explicitly assigned, -then by default `IOTDB_HOME` is the direct parent directory of `sbin/start-server.sh` on Unix/OS X -(or that of `sbin\start-server.bat` on Windows). - -* if `IOTDB_CLI_HOME` is not explicitly assigned, -then by default `IOTDB_CLI_HOME` is the direct parent directory of `sbin/start-cli.sh` on -Unix/OS X (or that of `sbin\start-cli.bat` on Windows). - -If you are not the first time that building IoTDB, remember deleting the following files: - -``` -> rm -rf $IOTDB_HOME/data/ -> rm -rf $IOTDB_HOME/lib/ -``` - -Then under the root path of incubator-iotdb, you can build IoTDB using Maven: - -``` -> pwd -/workspace/incubator-iotdb - -> mvn clean package -pl server -am -Dmaven.test.skip=true -``` - -If successful, you will see the the following text in the terminal: - -``` -[INFO] ------------------------------------------------------------------------ -[INFO] Reactor Summary: -[INFO] -[INFO] Apache IoTDB (incubating) Project Parent POM ....... SUCCESS [ 6.405 s] -[INFO] TsFile ............................................. SUCCESS [ 10.435 s] -[INFO] Service-rpc ........................................ SUCCESS [ 4.170 s] -[INFO] IoTDB Jdbc ......................................... SUCCESS [ 3.252 s] -[INFO] IoTDB Server ....................................... SUCCESS [ 8.072 s] -[INFO] ------------------------------------------------------------------------ -[INFO] BUILD SUCCESS -[INFO] ------------------------------------------------------------------------ -``` - -Otherwise, you may need to check the error statements and fix the problems. - -After building, the IoTDB project will be at the subfolder named iotdb. 
The folder will include the following contents: - -``` -$IOTDB_HOME/ -| -+- sbin/ <-- script files -| -+- conf/ <-- configuration files -| -+- lib/ <-- project dependencies -``` - - - -### Installation by Docker (Dockerfile) - -You can build and run a IoTDB docker image by following the guide of [Deployment by Docker](/#/Documents/progress/chap3/sec3) diff --git a/docs/Documentation/UserGuide/3-Deployment/4-TsFile library Installation.md b/docs/Documentation/UserGuide/3-Deployment/4-TsFile library Installation.md deleted file mode 100644 index d7b5b092727d9815b0c7fb5a084c506a5b89cd11..0000000000000000000000000000000000000000 --- a/docs/Documentation/UserGuide/3-Deployment/4-TsFile library Installation.md +++ /dev/null @@ -1,99 +0,0 @@ - - -# Chapter 3: Deployment - - -## TsFile libaray Installation - -TsFile is an important dependency of IoTDB. It is necessary to install TsFile for using IoTDB. - -Before started, maven should be installed. See How to install maven - -There are two ways to use TsFile in your own project. - -* Using as jars: - * Compile the source codes and build to jars - - ``` - git clone https://github.com/apache/incubator-iotdb.git - cd tsfile/ - mvn clean package -Dmaven.test.skip=true - ``` - Then, all the jars can be get in folder named `target/`. Import `target/tsfile-0.9.0-SNAPSHOT-jar-with-dependencies.jar` to your project. 
- -* Using as a maven dependency: - - Compile source codes and deploy to your local repository in three steps: - - * Get the source codes - - ``` - git clone https://github.com/apache/incubator-iotdb.git - ``` - * Compile the source codes and deploy - - ``` - cd tsfile/ - mvn clean install -Dmaven.test.skip=true - ``` - * add dependencies into your project: - - ``` - - org.apache.iotdb - tsfile - 0.9.0-SNAPSHOT - - ``` - - Or, you can download the dependencies from official Maven repository: - - * First, find your maven `settings.xml` on path: `${username}\.m2\settings.xml` - , add this `` to ``: - ``` - - allow-snapshots - true - - - apache.snapshots - Apache Development Snapshot Repository - https://repository.apache.org/content/repositories/snapshots/ - - false - - - true - - - - - ``` - * Then add dependencies into your project: - - ``` - - org.apache.iotdb - tsfile - 0.9.0-SNAPSHOT - - ``` diff --git a/docs/Documentation/UserGuide/3-Server/1-Download.md b/docs/Documentation/UserGuide/3-Server/1-Download.md new file mode 100644 index 0000000000000000000000000000000000000000..cabdd7f24af16fb4d5c3c2d920cbe19704832882 --- /dev/null +++ b/docs/Documentation/UserGuide/3-Server/1-Download.md @@ -0,0 +1,75 @@ + + +# Chapter 3: Server + +## Download + +IoTDB provides three installation methods; refer to the following suggestions and choose one of them: + +* Installation from source code. If you need to modify the code yourself, you can use this method. +* Installation from binary files. Download the binary files from the official website. This is the recommended method, in which you will get an out-of-the-box binary released package. (Coming Soon...) +* Using Docker: The path to the dockerfile is https://github.com/apache/incubator-iotdb/blob/master/docker/Dockerfile + +### Prerequisites + +To use IoTDB, you need to have: + +1. Java >= 1.8 (Please make sure the environment path has been set) +2. Maven >= 3.1 (Optional) +3. 
Set the max open files num as 65535 to avoid the "too many open files" problem. + +> Note: If you don't have Maven installed, replace 'mvn' in the following commands with 'mvnw.sh' or 'mvnw.cmd'. +### Installation from binary files + +You can download the binary file from: +[Here](https://iotdb.apache.org/#/Download) + +### Installation from source code + +You can get the released source code from https://iotdb.apache.org/#/Download, or from the git repository https://github.com/apache/incubator-iotdb/tree/master +You can clone the source code with: + +``` +git clone https://github.com/apache/incubator-iotdb.git +``` + +Under the root path of incubator-iotdb: + +``` +> mvn clean package -DskipTests +``` + +Then the binary version (including both server and client) can be found at **distribution/target/apache-iotdb-{project.version}-incubating-bin.zip** + +> NOTE: Directories "service-rpc/target/generated-sources/thrift" and "server/target/generated-sources/antlr3" need to be added to sources roots to avoid compilation errors in IDE. + +If you would like to build only the IoTDB server, you can run the following command under the root path of incubator-iotdb: + +``` +> mvn clean package -pl server -am -DskipTests +``` + +After the build, the IoTDB server will be in the folder "server/target/iotdb-server-{project.version}". + +### Installation by Docker (Dockerfile) + +You can build and run an IoTDB docker image by following the guide of [Deployment by Docker](/#/Documents/progress/chap3/sec5) diff --git a/docs/Documentation/UserGuide/3-Server/2-Single Node Setup.md b/docs/Documentation/UserGuide/3-Server/2-Single Node Setup.md new file mode 100644 index 0000000000000000000000000000000000000000..28dabc619ade6caf74bc5382c005a239cd1a98fc --- /dev/null +++ b/docs/Documentation/UserGuide/3-Server/2-Single Node Setup.md @@ -0,0 +1,32 @@ + +# Chapter 3: Server +## Single Node Setup + +Users can start IoTDB with the start-server script under the sbin folder. 
+ +``` +# Unix/OS X +> sbin/start-server.sh + +# Windows +> sbin\start-server.bat +``` \ No newline at end of file diff --git a/docs/Documentation/UserGuide/3-Server/3-Cluster Setup.md b/docs/Documentation/UserGuide/3-Server/3-Cluster Setup.md new file mode 100644 index 0000000000000000000000000000000000000000..57a1ee5b7da6251f4e65901a03c70e832807ce42 --- /dev/null +++ b/docs/Documentation/UserGuide/3-Server/3-Cluster Setup.md @@ -0,0 +1,24 @@ + +# Chapter 3: Server +## Cluster Setup + +Coming soon... diff --git a/docs/Documentation/UserGuide/3-Deployment/2-Configuration.md b/docs/Documentation/UserGuide/3-Server/4-Config Manual.md similarity index 98% rename from docs/Documentation/UserGuide/3-Deployment/2-Configuration.md rename to docs/Documentation/UserGuide/3-Server/4-Config Manual.md index 7fbfffad46a04df7731bcc22e7779f12c0ddf743..c3a00fa912f2e1d34f09f44ae395042ae89bec8c 100644 --- a/docs/Documentation/UserGuide/3-Deployment/2-Configuration.md +++ b/docs/Documentation/UserGuide/3-Server/4-Config Manual.md @@ -19,16 +19,16 @@ --> -# Chapter 3: Deployment and Management +# Chapter 3: Server -## Configuration +## Config Manual Before starting to use IoTDB, you need to config the configuration files first. For your convenience, we have already set the default config in the files. In total, we provide users three kinds of configurations module: -* environment configuration file (`iotdb-env.bat`, `iotdb-env.sh`). The default configuration file for the environment configuration item. Users can configure the relevant system configuration items of JAVA-JVM and set environment variable in the file. +* environment configuration file (`iotdb-env.bat`, `iotdb-env.sh`). The default configuration file for the environment configuration item. Users can configure the relevant system configuration items of JAVA-JVM in the file. * system configuration file (`tsfile-format.properties`, `iotdb-engine.properties`). 
* `tsfile-format.properties`: The default configuration file for the IoTDB file layer configuration item. Users can configure the information about the TsFile, such as the data size written to the disk per time(`group_size_in_byte`). * `iotdb-engine.properties`: The default configuration file for the IoTDB engine layer configuration item. Users can configure the IoTDB engine related parameters in the file, such as JDBC service listening port (`rpc_port`), unsequence data storage directory (`unsequence_data_dir`), etc. diff --git a/docs/Documentation/UserGuide/3-Deployment/3-Build and use IoTDB by Dockerfile.md b/docs/Documentation/UserGuide/3-Server/5-Docker Image.md similarity index 98% rename from docs/Documentation/UserGuide/3-Deployment/3-Build and use IoTDB by Dockerfile.md rename to docs/Documentation/UserGuide/3-Server/5-Docker Image.md index e3be6e0a900042f9aabcc9ca50100ef1b7dfa893..541761edf0a354c1c2e578ec477eae98127262a2 100644 --- a/docs/Documentation/UserGuide/3-Deployment/3-Build and use IoTDB by Dockerfile.md +++ b/docs/Documentation/UserGuide/3-Server/5-Docker Image.md @@ -19,9 +19,9 @@ --> -# Chapter 3: Deployment +# Chapter 3: Server -## Build and use IoTDB by Dockerfile +## Docker Image Now a Dockerfile has been written at ROOT/docker/Dockerfile on the branch enable_docker_image. 1. You can build a docker image by: diff --git a/docs/Documentation/UserGuide/4-Operation Manual/1-Cli Shell Tool.md b/docs/Documentation/UserGuide/4-Client/1-Command Line Interface (Cli).md similarity index 94% rename from docs/Documentation/UserGuide/4-Operation Manual/1-Cli Shell Tool.md rename to docs/Documentation/UserGuide/4-Client/1-Command Line Interface (Cli).md index a4724285592d55a725484723b1e25889b34797ed..b67ef0d9045b53ab35000934f87c063b370f2235 100644 --- a/docs/Documentation/UserGuide/4-Operation Manual/1-Cli Shell Tool.md +++ b/docs/Documentation/UserGuide/4-Client/1-Command Line Interface (Cli).md @@ -18,21 +18,30 @@ under the License. 
--> - - -# Chapter 4: Operation Manual +# Chapter 4: Client ## Outline -- Cli/shell tool + +- Command Line Interface (CLI) - Running Cli/Shell - Cli/Shell Parameters - Cli/shell tool with -e parameter -# Cli/shell tool +# Command Line Interface (CLI) IoTDB provides Cli/shell tools for users to interact with IoTDB server in command lines. This document will show how Cli/shell tool works and what does it parameters mean. > Note: In this document, \$IOTDB\_HOME represents the path of the IoTDB installation directory. +## Build client from source code + +Under the root path of incubator-iotdb: + +``` +> mvn clean package -pl client -am -DskipTests +``` + +After the build, the IoTDB client will be in the folder "client/target/iotdb-client-{project.version}". + ## Running Cli/Shell After installation, there is a default user in IoTDB: `root`, and the
Chapter6: API -# Session API +# Chapter 4: Client +# Programming - Session ## Usage ### Dependencies diff --git a/docs/Documentation/UserGuide/6-API/3-Python API.md b/docs/Documentation/UserGuide/4-Client/4-Programming - Other Language.md similarity index 93% rename from docs/Documentation/UserGuide/6-API/3-Python API.md rename to docs/Documentation/UserGuide/4-Client/4-Programming - Other Language.md index d9f1015acc7b3de8cb7ab4126758f1ff02ff9760..1c0773e01b747b8c846b0442571953a1efee301f 100644 --- a/docs/Documentation/UserGuide/6-API/3-Python API.md +++ b/docs/Documentation/UserGuide/4-Client/4-Programming - Other Language.md @@ -18,14 +18,15 @@ under the License. --> -# Chapter6: API +# Chapter 4: Client -# Python API -# Introduction +# Programming - Other Language +## Python API +### Introduction This is an example of how to connect to IoTDB with python, using the thrift rpc interfaces. Things will be a bit different on Linux or Windows, we will introduce how to operate on the two systems separately. -## Prerequisites +### Prerequisites python3.7 or later is preferred. You have to install Thrift (0.11.0 or later) to compile our thrift file into python code. Below is the official @@ -34,11 +35,11 @@ tutorial of installation: http://thrift.apache.org/docs/install/ ``` -## Compile +### Compile If you have added Thrift executable into your path, you may just run `compile.sh` or `compile.bat`, or you will have to modify it to set variable `THRIFT_EXE` to point to your executable. This will generate thrift sources under folder `target`, you can add it to your `PYTHONPATH` so that you would be able to use the library in your code. -## Example +### Example We provided an example of how to use the thrift library to connect to IoTDB in `src\client_example.py`, please read it carefully before you write your own code. 
diff --git a/docs/Documentation/UserGuide/4-Operation Manual/8-TsFile Usage.md b/docs/Documentation/UserGuide/4-Client/5-Programming - TsFile API (TimeSeries File Format).md similarity index 64% rename from docs/Documentation/UserGuide/4-Operation Manual/8-TsFile Usage.md rename to docs/Documentation/UserGuide/4-Client/5-Programming - TsFile API (TimeSeries File Format).md index b202474cfc4c2f8ebf0ef8af28180d1076520d67..1aae31e9bce89f3a632efce840f9e9f7b9b04f6a 100644 --- a/docs/Documentation/UserGuide/4-Operation Manual/8-TsFile Usage.md +++ b/docs/Documentation/UserGuide/4-Client/5-Programming - TsFile API (TimeSeries File Format).md @@ -19,7 +19,81 @@ --> -# Chapter 4: Operation Manual +# Chapter 4: Client +# Programming - TsFile API (TimeSeries File Format) + +## TsFile library Installation + + +There are two ways to use TsFile in your own project. + +* Using as jars: + * Compile the source codes and build to jars + + ``` + git clone https://github.com/apache/incubator-iotdb.git + cd tsfile/ + mvn clean package -Dmaven.test.skip=true + ``` + Then, all the jars can be found in the folder named `target/`. Import `target/tsfile-0.9.0-SNAPSHOT-jar-with-dependencies.jar` to your project. 
+ +* Using as a maven dependency: + + Compile source codes and deploy to your local repository in three steps: + + * Get the source codes + + ``` + git clone https://github.com/apache/incubator-iotdb.git + ``` + * Compile the source codes and deploy + + ``` + cd tsfile/ + mvn clean install -Dmaven.test.skip=true + ``` + * Add dependencies into your project: + + ``` + <dependency> + <groupId>org.apache.iotdb</groupId> + <artifactId>tsfile</artifactId> + <version>0.9.0-SNAPSHOT</version> + </dependency> + ``` + + Or, you can download the dependencies from the official Maven repository: + + * First, find your maven `settings.xml` on path: `${username}\.m2\settings.xml` + , add this `<profile>` to `<profiles>`: + ``` + <profile> + <id>allow-snapshots</id> + <activation><activeByDefault>true</activeByDefault></activation> + <repositories> + <repository> + <id>apache.snapshots</id> + <name>Apache Development Snapshot Repository</name> + <url>https://repository.apache.org/content/repositories/snapshots/</url> + <releases> + <enabled>false</enabled> + </releases> + <snapshots> + <enabled>true</enabled> + </snapshots> + </repository> + </repositories> + </profile> + ``` + * Then add dependencies into your project: + + ``` + <dependency> + <groupId>org.apache.iotdb</groupId> + <artifactId>tsfile</artifactId> + <version>0.9.0-SNAPSHOT</version> + </dependency> + ``` ## TSFile Usage This section demonstrates the detailed usages of TsFile. @@ -39,13 +113,13 @@ with three measurements named "sensor\_1", "sensor\_2" and "sensor\_3".
- - - - - - + + + + + +
device_1
sensor_1sensor_2sensor_3
timevaluetimevaluetimevalue -
11.2120250
31.4220451
51.1321652
71.8420853
device_1
sensor_1sensor_2sensor_3
timevaluetimevaluetimevalue +
11.2120250
31.4220451
51.1321652
71.8420853
A set of time-series data
@@ -83,13 +157,13 @@ A TsFile can be generated by following three steps and the complete code will be public TsFileWriter(File file) throws IOException ``` * With pre-defined schema - ``` - public TsFileWriter(File file, Schema schema) throws IOException - ``` - This one is for using the HDFS file system. `TsFileOutput` can be an instance of class `HDFSOutput`. - - ``` - public TsFileWriter(TsFileOutput output, Schema schema) throws IOException + ``` + public TsFileWriter(File file, Schema schema) throws IOException + ``` + This one is for using the HDFS file system. `TsFileOutput` can be an instance of class `HDFSOutput`. + + ``` + public TsFileWriter(TsFileOutput output, Schema schema) throws IOException ``` If you want to set some TSFile configuration on your own, you could use param `config`. For example: @@ -102,53 +176,53 @@ A TsFile can be generated by following three steps and the complete code will be You can also config the ip and port of your HDFS by `config.setHdfsIp(...)` and `config.setHdfsPort(...)`. The default ip is `localhost` and default port is `9000`. - **Parameters:** - - * file : The TsFile to write - - * schema : The file schemas, will be introduced in next part. - - * config : The config of TsFile. + **Parameters:** + + * file : The TsFile to write + + * schema : The file schemas, will be introduced in next part. + + * config : The config of TsFile. * Second, add measurements - - Or you can make an instance of class `Schema` first and pass this to the constructor of class `TsFileWriter` - - The class `Schema` contains a map whose key is the name of one measurement schema, and the value is the schema itself. 
- - Here are the interfaces: - ``` - // Create an empty Schema or from an existing map - public Schema() - public Schema(Map measurements) - // Use this two interfaces to add measurements - public void registerMeasurement(MeasurementSchema descriptor) - public void registerMeasurements(Map measurements) - // Some useful getter and checker - public TSDataType getMeasurementDataType(String measurementId) - public MeasurementSchema getMeasurementSchema(String measurementId) - public Map getAllMeasurementSchema() - public boolean hasMeasurement(String measurementId) - ``` - - You can always use the following interface in `TsFileWriter` class to add additional measurements: + + Or you can make an instance of class `Schema` first and pass this to the constructor of class `TsFileWriter` + + The class `Schema` contains a map whose key is the name of one measurement schema, and the value is the schema itself. + + Here are the interfaces: + ``` + // Create an empty Schema or from an existing map + public Schema() + public Schema(Map measurements) + // Use this two interfaces to add measurements + public void registerMeasurement(MeasurementSchema descriptor) + public void registerMeasurements(Map measurements) + // Some useful getter and checker + public TSDataType getMeasurementDataType(String measurementId) + public MeasurementSchema getMeasurementSchema(String measurementId) + public Map getAllMeasurementSchema() + public boolean hasMeasurement(String measurementId) + ``` + + You can always use the following interface in `TsFileWriter` class to add additional measurements: ``` public void addMeasurement(MeasurementSchema measurementSchema) throws WriteProcessException ``` - - The class `MeasurementSchema` contains the information of one measurement, there are several constructors: - ``` - public MeasurementSchema(String measurementId, TSDataType type, TSEncoding encoding) - public MeasurementSchema(String measurementId, TSDataType type, TSEncoding encoding, CompressionType 
compressionType) - public MeasurementSchema(String measurementId, TSDataType type, TSEncoding encoding, CompressionType compressionType, - Map props) + + The class `MeasurementSchema` contains the information of one measurement, there are several constructors: + ``` + public MeasurementSchema(String measurementId, TSDataType type, TSEncoding encoding) + public MeasurementSchema(String measurementId, TSDataType type, TSEncoding encoding, CompressionType compressionType) + public MeasurementSchema(String measurementId, TSDataType type, TSEncoding encoding, CompressionType compressionType, + Map props) ``` **Parameters:** - + * measurementID: The name of this measurement, typically the name of the sensor. - + * type: The data type, now support six types: `BOOLEAN`, `INT32`, `INT64`, `FLOAT`, `DOUBLE`, `TEXT`; * encoding: The data encoding. See [Chapter 2-3](/#/Documents/progress/chap2/sec3). @@ -157,37 +231,37 @@ A TsFile can be generated by following three steps and the complete code will be * props: Properties for special data types.Such as `max_point_number` for `FLOAT` and `DOUBLE`, `max_string_length` for `TEXT`. Use as string pairs into a map such as ("max_point_number", "3"). - + > **Notice:** Although one measurement name can be used in multiple deltaObjects, the properties cannot be changed. I.e. it's not allowed to add one measurement name for multiple times with different type or encoding. 
Here is a bad example: - // The measurement "sensor_1" is float type - addMeasurement(new MeasurementSchema("sensor_1", TSDataType.FLOAT, TSEncoding.RLE)); - - // This call will throw a WriteProcessException exception - addMeasurement(new MeasurementSchema("sensor_1", TSDataType.INT32, TSEncoding.RLE)); + // The measurement "sensor_1" is float type + addMeasurement(new MeasurementSchema("sensor_1", TSDataType.FLOAT, TSEncoding.RLE)); + + // This call will throw a WriteProcessException exception + addMeasurement(new MeasurementSchema("sensor_1", TSDataType.INT32, TSEncoding.RLE)); * Third, insert and write data continually. - - Use this interface to create a new `TSRecord`(a timestamp and device pair). - - ``` - public TSRecord(long timestamp, String deviceId) - ``` - Then create a `DataPoint`(a measurement and value pair), and use the addTuple method to add the DataPoint to the correct - TsRecord. - - Use this method to write - - ``` - public void write(TSRecord record) throws IOException, WriteProcessException - ``` - + + Use this interface to create a new `TSRecord`(a timestamp and device pair). + + ``` + public TSRecord(long timestamp, String deviceId) + ``` + Then create a `DataPoint`(a measurement and value pair), and use the addTuple method to add the DataPoint to the correct + TsRecord. + + Use this method to write + + ``` + public void write(TSRecord record) throws IOException, WriteProcessException + ``` + * Finally, call `close` to finish this writing process. - - ``` - public void close() throws IOException - ``` + + ``` + public void close() throws IOException + ``` #### Example for writing a TsFile @@ -348,13 +422,13 @@ The set of time-series data in section "Time-series Data" is used here for a con
- - - - - - + + + + + +
device_1
sensor_1sensor_2sensor_3
timevaluetimevaluetimevalue -
11.2120250
31.4220451
51.1321652
71.8420853
device_1
sensor_1sensor_2sensor_3
timevaluetimevaluetimevalue +
11.2120250
31.4220451
51.1321652
71.8420853
A set of time-series data
@@ -395,75 +469,75 @@ The `IExpression` is a filter expression interface and it will be passed to our We create one or more filter expressions and may use binary filter operators to link them to our final expression. * **Create a Filter Expression** - - There are two types of filters. - - * TimeFilter: A filter for `time` in time-series data. - ``` - IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter); - ``` - Use the following relationships to get a `TimeFilter` object (value is a long int variable). -
+ + There are two types of filters. + + * TimeFilter: A filter for `time` in time-series data. + ``` + IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter); + ``` + Use the following relationships to get a `TimeFilter` object (value is a long int variable). +
- - - - - - - - + + + + + + + +
RelationshipDescription
TimeFilter.eq(value)Choose the time equal to the value
TimeFilter.lt(value)Choose the time less than the value
TimeFilter.gt(value)Choose the time greater than the value
TimeFilter.ltEq(value)Choose the time less than or equal to the value
TimeFilter.gtEq(value)Choose the time greater than or equal to the value
TimeFilter.notEq(value)Choose the time not equal to the value
TimeFilter.not(TimeFilter)Choose the time not satisfy another TimeFilter
RelationshipDescription
TimeFilter.eq(value)Choose the time equal to the value
TimeFilter.lt(value)Choose the time less than the value
TimeFilter.gt(value)Choose the time greater than the value
TimeFilter.ltEq(value)Choose the time less than or equal to the value
TimeFilter.gtEq(value)Choose the time greater than or equal to the value
TimeFilter.notEq(value)Choose the time not equal to the value
TimeFilter.not(TimeFilter)Choose the time not satisfy another TimeFilter
- - * ValueFilter: A filter for `value` in time-series data. - - ``` - IExpression valueFilterExpr = new SingleSeriesExpression(Path, ValueFilter); - ``` - The usage of `ValueFilter` is the same as using `TimeFilter`, just to make sure that the type of the value - equal to the measurement's(defined in the path). + + * ValueFilter: A filter for `value` in time-series data. + + ``` + IExpression valueFilterExpr = new SingleSeriesExpression(Path, ValueFilter); + ``` + The usage of `ValueFilter` is the same as using `TimeFilter`, just to make sure that the type of the value + equal to the measurement's(defined in the path). * **Binary Filter Operators** - Binary filter operators can be used to link two single expressions. + Binary filter operators can be used to link two single expressions. - * BinaryExpression.and(Expression, Expression): Choose the value satisfy for both expressions. - * BinaryExpression.or(Expression, Expression): Choose the value satisfy for at least one expression. - + * BinaryExpression.and(Expression, Expression): Choose the value satisfy for both expressions. + * BinaryExpression.or(Expression, Expression): Choose the value satisfy for at least one expression. 
+ ##### Filter Expression Examples * **TimeFilterExpression Examples** - ``` - IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.eq(15)); // series time = 15 + ``` + IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.eq(15)); // series time = 15 - ``` - ``` - IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.ltEq(15)); // series time <= 15 + ``` + ``` + IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.ltEq(15)); // series time <= 15 - ``` - ``` - IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.lt(15)); // series time < 15 + ``` + ``` + IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.lt(15)); // series time < 15 - ``` - ``` - IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.gtEq(15)); // series time >= 15 + ``` + ``` + IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.gtEq(15)); // series time >= 15 - ``` - ``` - IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.notEq(15)); // series time != 15 + ``` + ``` + IExpression timeFilterExpr = new GlobalTimeExpression(TimeFilter.notEq(15)); // series time != 15 - ``` - ``` - IExpression timeFilterExpr = BinaryExpression.and(new GlobalTimeExpression(TimeFilter.gtEq(15L)), + ``` + ``` + IExpression timeFilterExpr = BinaryExpression.and(new GlobalTimeExpression(TimeFilter.gtEq(15L)), new GlobalTimeExpression(TimeFilter.lt(25L))); // 15 <= series time < 25 - ``` - ``` - IExpression timeFilterExpr = BinaryExpression.or(new GlobalTimeExpression(TimeFilter.gtEq(15L)), + ``` + ``` + IExpression timeFilterExpr = BinaryExpression.or(new GlobalTimeExpression(TimeFilter.gtEq(15L)), new GlobalTimeExpression(TimeFilter.lt(25L))); // series time >= 15 or series time < 25 - ``` + ``` #### Read Interface First, we open the TsFile and get a `ReadOnlyTsFile` instance from a file path string `path`. 
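The inclusive/exclusive behavior of the composed expressions above can be sketched in plain Java. This is only an illustration of the semantics using `java.util.function.LongPredicate` — it does not use the tsfile library's `TimeFilter`/`BinaryExpression` classes themselves, whose real signatures are the ones documented above:

```java
import java.util.function.LongPredicate;

public class FilterSemantics {
    public static void main(String[] args) {
        // Mirrors BinaryExpression.and(new GlobalTimeExpression(TimeFilter.gtEq(15L)),
        //                              new GlobalTimeExpression(TimeFilter.lt(25L))),
        // i.e. keep rows with 15 <= time < 25.
        LongPredicate gtEq15 = time -> time >= 15L;
        LongPredicate lt25   = time -> time < 25L;
        LongPredicate and    = gtEq15.and(lt25); // like BinaryExpression.and
        LongPredicate or     = gtEq15.or(lt25);  // like BinaryExpression.or

        System.out.println(and.test(15L)); // true: lower bound is inclusive (gtEq)
        System.out.println(and.test(25L)); // false: upper bound is exclusive (lt)
        System.out.println(or.test(9L));   // true: 9 < 25 satisfies the second operand
    }
}
```

In the real API, the composed `IExpression` is then passed to `QueryExpression.create(paths, expression)` as described in the Read Interface section.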
@@ -482,25 +556,25 @@ QueryExpression queryExpression = QueryExpression.create(paths, statement); The ReadOnlyTsFile class has two `query` method to perform a query. * **Method 1** - ``` - public QueryDataSet query(QueryExpression queryExpression) throws IOException - ``` + ``` + public QueryDataSet query(QueryExpression queryExpression) throws IOException + ``` * **Method 2** - ``` - public QueryDataSet query(QueryExpression queryExpression, long partitionStartOffset, long partitionEndOffset) throws IOException - ``` + ``` + public QueryDataSet query(QueryExpression queryExpression, long partitionStartOffset, long partitionEndOffset) throws IOException + ``` - This method is designed for advanced applications such as the TsFile-Spark Connector. + This method is designed for advanced applications such as the TsFile-Spark Connector. - * **params** : For method 2, two additional parameters are added to support partial query: - * ```partitionStartOffset```: start offset for a TsFile - * ```partitionEndOffset```: end offset for a TsFile + * **params** : For method 2, two additional parameters are added to support partial query: + * ```partitionStartOffset```: start offset for a TsFile + * ```partitionEndOffset```: end offset for a TsFile - > **What is Partial Query ?** - > - > In some distributed file systems(e.g. HDFS), a file is split into severval parts which are called "Blocks" and stored in different nodes. Executing a query paralleled in each nodes involved makes better efficiency. Thus Partial Query is needed. Paritial Query only selects the results stored in the part split by ```QueryConstant.PARTITION_START_OFFSET``` and ```QueryConstant.PARTITION_END_OFFSET``` for a TsFile. + > **What is Partial Query ?** + > + > In some distributed file systems (e.g. HDFS), a file is split into several parts, called "Blocks", which are stored on different nodes. Executing a query in parallel on each involved node is more efficient, so Partial Query is needed. 
Partial Query only selects the results stored in the part split by ```QueryConstant.PARTITION_START_OFFSET``` and ```QueryConstant.PARTITION_END_OFFSET``` for a TsFile. ### QueryDataset Interface @@ -629,7 +703,6 @@ If you want to learn more about its mechanism, you can refer to: [wiki page of b you can control the false positive rate of bloom filter by the following parameter in the config file `tsfile-format.properties` which located at `/server/src/assembly/resources/conf` directory ``` # The acceptable error rate of bloom filter, should be in [0.01, 0.1], default is 0.05 - bloom_filter_error_rate=0.05 ``` diff --git a/docs/Documentation/UserGuide/4-Operation Manual/3-Data Import.md b/docs/Documentation/UserGuide/4-Operation Manual/3-Data Import.md deleted file mode 100644 index 59bba5d625df1487150b6dd7ee094f93d7342e0d..0000000000000000000000000000000000000000 --- a/docs/Documentation/UserGuide/4-Operation Manual/3-Data Import.md +++ /dev/null @@ -1,87 +0,0 @@ - - -# Chapter 4: Operation Manual - -## Data Import -### Import Historical Data - -This feature is not supported in version 0.7.0. - -### Import Real-time Data - -IoTDB provides users with a variety of ways to insert real-time data, such as directly inputting [INSERT SQL statement](/#/Documents/progress/chap4/sec7) in [Client/Shell tools](/#/Tools/Client), or using [Java JDBC](/#/Documents/progress/chap4/sec7) to perform single or batch execution of [INSERT SQL statement](/#/Documents/progress/chap4/sec7). - -This section mainly introduces the use of [INSERT SQL statement](/#/Documents/progress/chap4/sec7) for real-time data import in the scenario. See Section 4.7 for a detailed syntax of [INSERT SQL statement](/#/Documents/progress/chap4/sec7). - -#### Use of INSERT Statements -The [INSERT SQL statement](/#/Documents/progress/chap4/sec7) statement can be used to insert data into one or more specified timeseries that have been created. 
For each point of data inserted, it consists of a [timestamp](/#/Documents/progress/chap2/sec1) and a sensor acquisition value of a numerical type (see [Data Type](/#/Documents/progress/chap2/sec2)). - -In the scenario of this section, take two timeseries `root.ln.wf02.wt02.status` and `root.ln.wf02.wt02.hardware` as an example, and their data types are BOOLEAN and TEXT, respectively. - -The sample code for single column data insertion is as follows: -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp,status) values(1,true) -IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1, "v1") -``` - -The above example code inserts the long integer timestamp and the value "true" into the timeseries `root.ln.wf02.wt02.status` and inserts the long integer timestamp and the value "v1" into the timeseries `root.ln.wf02.wt02.hardware`. When the execution is successful, cost time is shown to indicate that the data insertion has been completed. - -> Note: In IoTDB, TEXT type data can be represented by single and double quotation marks. The insertion statement above uses double quotation marks for TEXT type data. The following example will use single quotation marks for TEXT type data. - -The INSERT statement can also support the insertion of multi-column data at the same time point. The sample code of inserting the values of the two timeseries at the same time point '2' is as follows: - -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp, status, hardware) VALUES (2, false, 'v2') -``` - -After inserting the data, we can simply query the inserted data using the SELECT statement: - -``` -IoTDB > select * from root.ln.wf02 where time < 3 -``` - -The result is shown below. From the query results, it can be seen that the insertion statements of single column and multi column data are performed correctly. - -
- -### Error Handling of INSERT Statements -If the user inserts data into a non-existent timeseries, for example, execute the following commands: - -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp, temperature) values(1,"v1") -``` - -Because `root.ln.wf02.wt02. temperature` does not exist, the system will return the following ERROR information: - -``` -Msg: Current deviceId[root.ln.wf02.wt02] does not contains measurement:temperature -``` -If the data type inserted by the user is inconsistent with the corresponding data type of the timeseries, for example, execute the following command: - -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1,100) -``` -The system will return the following ERROR information: - -``` -error: The TEXT data type should be covered by " or ' -``` diff --git a/docs/Documentation/UserGuide/4-Operation Manual/5-Data Maintenance.md b/docs/Documentation/UserGuide/4-Operation Manual/5-Data Maintenance.md deleted file mode 100644 index cb2267db7869c803a73f7b340e829d25d9dd0e6b..0000000000000000000000000000000000000000 --- a/docs/Documentation/UserGuide/4-Operation Manual/5-Data Maintenance.md +++ /dev/null @@ -1,86 +0,0 @@ - - -# Chapter 4: Operation Manual - -## Data Maintenance - - - -### Data Deletion - -Users can delete data that meet the deletion condition in the specified timeseries by using the [DELETE statement](/#/Documents/progress/chap4/sec7). When deleting data, users can select one or more timeseries paths, prefix paths, or paths with star to delete data before a certain time (version 0.8.0 does not support the deletion of data within a closed time interval). - -In a JAVA programming environment, you can use the [Java JDBC](/#/Documents/progress/chap6/sec1) to execute single or batch UPDATE statements. 
- -#### Delete Single Timeseries -Taking ln Group as an example, there exists such a usage scenario: - -The wf02 plant's wt02 device has many segments of errors in its power supply status before 2017-11-01 16:26:00, and the data cannot be analyzed correctly. The erroneous data affected the correlation analysis with other devices. At this point, the data before this time point needs to be deleted. The SQL statement for this operation is - -``` -delete from root.ln.wf02.wt02.status where time<=2017-11-01T16:26:00; -``` - -#### Delete Multiple Timeseries -When both the power supply status and hardware version of the ln group wf02 plant wt02 device before 2017-11-01 16:26:00 need to be deleted, [the prefix path with broader meaning or the path with star](/#/Documents/progress/chap2/sec1) can be used to delete the data. The SQL statement for this operation is: - -``` -delete from root.ln.wf02.wt02 where time <= 2017-11-01T16:26:00; -``` -or - -``` -delete from root.ln.wf02.wt02.* where time <= 2017-11-01T16:26:00; -``` -It should be noted that when the deleted path does not exist, IoTDB will give the corresponding error prompt as shown below: - -``` -IoTDB> delete from root.ln.wf03.wt02.status where time < now() -Msg: TimeSeries does not exist and its data cannot be deleted -``` diff --git a/docs/Documentation/UserGuide/5-Management/2-Performance Monitor.md b/docs/Documentation/UserGuide/5-Management/2-Performance Monitor.md deleted file mode 100644 index dfb80e9d28f785dc1788dfe4bfab0291aa65562c..0000000000000000000000000000000000000000 --- a/docs/Documentation/UserGuide/5-Management/2-Performance Monitor.md +++ /dev/null @@ -1,90 +0,0 @@ - - -# Chapter 5: Management - -## Performance Monitor - -### Introduction - -In order to grasp the performance of iotdb, we add this module to count the time-consuming of each operation. This module can statistic the avg time-consuming of each operation and the proportion of each operation fall into a time range. 
The output is in log_measure.log file. A output example is in below.   - - - -### Configuration parameter - -location:conf/iotdb-engine.properties - -
**Table -parameter and description** - -|Parameter|Default Value|Description| -|:---|:---|:---| -|enable\_performance\_stat|false|Is stat performance of sub-module enable.| -|performance\_stat\_display\_interval|60000|The interval of display statistic result in ms.| -|performance_stat_memory_in_kb|20|The memory used for performance_stat in kb.| -
- -### JMX MBean - -Connect to jconsole with port 31999,and choose ‘MBean’in menu bar. Expand the sidebar and choose 'org.apache.iotdb.db.cost.statistic'. You can Find: - - - -**Attribute** - -1. EnableStat:Whether the statistics are enable or not, if it is true, the module records the time-consuming of each operation and prints the results; It can not be set dynamically but changed by function in below. - -2. DisplayIntervalInMs:The interval between print results. It can be set dynamically, but will take effect after restart.( First call stopStatistic(), then call startContinuousStatistics() or startOneTimeStatistics()) -3. OperationSwitch:It's a map to indicate whether stat the operation, the key is operation name and the value is stat state. This parameter cannot be changed directly, it's change by operation 'changeOperationSwitch()'. - -**Operation** - -1. startContinuousStatistics: Start the statistics and output at interval of ‘DisplayIntervalInMs’. -2. startOneTimeStatistics:Start the statistics and output in delay of ‘DisplayIntervalInMs’. -3. stopStatistic:Stop the statistics. -4. clearStatisticalState(): clear current stat result, reset statistical result. -5. changeOperationSwitch(String operationName, Boolean operationState):set whether to monitor operation status. The param 'operationName' is the name of operation, defined in attribute operationSwitch. The param operationState is the state of operation. If state-switch successful the function will return true, else return false. - -### Adding Custom Monitoring Items for developer of IOTDB - -**Add Operation** - -Add an enumeration in org.apache.iotdb.db.cost.statistic.Operation. - -**Add Timing Code in Monitoring Area** - -Add timing code in the monitoring start area: - - long t0 = System. 
currentTimeMillis(); - -Add timing code in the monitoring stop area: - - Measurement.INSTANCE.addOperationLatency(Operation, t0); - -## Cache Hit Ratio Statistics - -### Overview - -To improve query performance, IOTDB caches ChunkMetaData and TsFileMetaData. Users can view the cache hit rate through debug level log and MXBean, and adjust the memory occupied by the cache according to the cache hit rate and system memory. The method of using MXBean to view cache hit ratio is as follows: -1. Connect to jconsole with port 31999 and select 'MBean' in the menu item above. -2. Expand the sidebar and select 'org.apache.iotdb.db.service'. You will get the results shown in the following figure: - - \ No newline at end of file diff --git a/docs/Documentation/UserGuide/5-Management/3-System log.md b/docs/Documentation/UserGuide/5-Management/3-System log.md deleted file mode 100644 index d945a0eadd7d0cac3182c7812b2278728fea664e..0000000000000000000000000000000000000000 --- a/docs/Documentation/UserGuide/5-Management/3-System log.md +++ /dev/null @@ -1,66 +0,0 @@ - - -# Chapter 5: Management - -## System log - -IoTDB allows users to configure IoTDB system logs (such as log output level) by modifying the log configuration file. The default location of the system log configuration file is in \$IOTDB_HOME/conf folder. - -The default log configuration file is named logback.xml. The user can modify the configuration of the system running log by adding or changing the xml tree node parameters. It should be noted that the configuration of the system log using the log configuration file does not take effect immediately after the modification, instead, it will take effect after restarting the system. The usage of logback.xml is just as usual. 
- -At the same time, in order to facilitate the debugging of the system by the developers and DBAs, we provide several JMX interface to dynamically modify the log configuration, and configure the Log module of the system in real time without restarting the system. For detailed usage, see [Dynamic System Log Configuration](/#/Documents/progress/chap5/sec4) section. - -### Dynamic System Log Configuration - -#### Connect JMX - -Here we use JConsole to connect with JMX. - -Start the JConsole, establish a new JMX connection with the IoTDB Server (you can select the local process or input the IP and PORT for remote connection, the default operation port of the IoTDB JMX service is 31999). Fig 4.1 shows the connection GUI of JConsole. - - - -After connected, click `MBean` and find `ch.qos.logback.classic.default.ch.qos.logback.classic.jmx.JMXConfigurator`(As shown in fig 4.2). - - -In the JMXConfigurator Window, there are 6 operation provided for you, as shown in fig 4.3. You can use there interface to perform operation. - - - -#### Interface Instruction - -* reloadDefaultConfiguration - -This method is to reload the default logback configuration file. The user can modify the default configuration file first, and then call this method to reload the modified configuration file into the system to take effect. - -* reloadByFileName - -This method loads a logback configuration file with the specified path and name, and then makes it take effect. This method accepts a parameter of type String named p1, which is the path to the configuration file that needs to be specified for loading. - -* getLoggerEffectiveLevel - -This method is to obtain the current log level of the specified Logger. This method accepts a String type parameter named p1, which is the name of the specified Logger. This method returns the log level currently in effect for the specified Logger. - -* getLoggerLevel - -This method is to obtain the log level of the specified Logger. 
This method accepts a String type parameter named p1, which is the name of the specified Logger. This method returns the log level of the specified Logger. -It should be noted that the difference between this method and the `getLoggerEffectiveLevel` method is that the method returns the log level that the specified Logger is set in the configuration file. If the user does not set the log level for the Logger. , then return empty. According to Logre's log-level inheritance mechanism, if a Logger is not displayed to set the log level, it will inherit the log level settings from its nearest ancestor. At this point, calling the `getLoggerEffectiveLevel` method will return the log level in which the Logger is in effect; calling the methods described in this section will return null. diff --git a/docs/Documentation/UserGuide/4-Operation Manual/2-Data Model Selection.md b/docs/Documentation/UserGuide/5-Operation Manual/1-DDL (Data Definition Language).md similarity index 56% rename from docs/Documentation/UserGuide/4-Operation Manual/2-Data Model Selection.md rename to docs/Documentation/UserGuide/5-Operation Manual/1-DDL (Data Definition Language).md index 72fa87bddd4aa0b5153a85b42e7a9db5d1d6dfa9..85b27ba6e69ccc14a26fec3a26688e74cc5fc9cb 100644 --- a/docs/Documentation/UserGuide/4-Operation Manual/2-Data Model Selection.md +++ b/docs/Documentation/UserGuide/5-Operation Manual/1-DDL (Data Definition Language).md @@ -19,29 +19,11 @@ --> -# Chapter 4: Operation Manual +# Chapter 5: Operation Manual +# DDL (Data Definition Language) -## Sample Data - -To make this manual more practical, we will use a specific scenario example to illustrate how to operate IoTDB databases at all stages of use. See [this page](https://github.com/apache/incubator-iotdb/blob/master/docs/Documentation/OtherMaterial-Sample%20Data.txt) for a look. For your convenience, we also provide you with a sample data file in real scenario to import into the IoTDB system for trial and operation. 
- -Download file: [IoTDB-SampleData.txt](https://raw.githubusercontent.com/apache/incubator-iotdb/master/docs/Documentation/OtherMaterial-Sample%20Data.txt). - -## Data Model Selection - -Before importing data to IoTDB, we first select the appropriate data storage model according to the [sample data](https://raw.githubusercontent.com/apache/incubator-iotdb/master/docs/Documentation/OtherMaterial-Sample%20Data.txt), and then create the storage group and timeseries using [SET STORAGE GROUP](/#/Documents/progress/chap4/sec7) statement and [CREATE TIMESERIES](/#/Documents/progress/chap4/sec7) statement respectively. - -### Storage Model Selection -According to the data attribute layers described in [sample data](https://raw.githubusercontent.com/apache/incubator-iotdb/master/docs/Documentation/OtherMaterial-Sample%20Data.txt), we can express it as an attribute hierarchy structure based on the coverage of attributes and the subordinate relationship between them, as shown in Figure 3.1 below. Its hierarchical relationship is: power group layer - power plant layer - device layer - sensor layer. ROOT is the root node, and each node of sensor layer is called a leaf node. In the process of using IoTDB, you can directly connect the attributes on the path from ROOT node to each leaf node with ".", thus forming the name of a timeseries in IoTDB. For example, The left-most path in Figure 3.1 can generate a timeseries named `ROOT.ln.wf01.wt01.status`. - -
- -**Figure 3.1 Attribute hierarchy structure**
- -After getting the name of the timeseries, we need to set up the storage group according to the actual scenario and scale of the data. Because in the scenario of this chapter data is usually arrived in the unit of groups (i.e., data may be across electric fields and devices), in order to avoid frequent switching of IO when writing data, and to meet the user's requirement of physical isolation of data in the unit of groups, we set the storage group at the group layer. - -### Storage Group Creation -After selecting the storage model, according to which we can set up the corresponding storage group. The SQL statements for creating storage groups are as follows: +### Create Storage Group +According to the storage model we can set up the corresponding storage group. The SQL statements for creating storage groups are as follows: ``` IoTDB > set storage group to root.ln @@ -58,7 +40,7 @@ Msg: org.apache.iotdb.exception.MetadataErrorException: org.apache.iotdb.excepti ``` ### Show Storage Group -After the storage group is created, we can use the [SHOW STORAGE GROUP](/#/Documents/progress/chap4/sec7) statement to view all the storage groups. The SQL statement is as follows: +After the storage group is created, we can use the [SHOW STORAGE GROUP](/#/Documents/progress/chap5/sec4) statement to view all the storage groups. The SQL statement is as follows: ``` IoTDB> show storage group @@ -67,7 +49,8 @@ IoTDB> show storage group The result is as follows:
-### Timeseries Creation
+
+### Create Timeseries
 According to the storage model selected before, we can create corresponding timeseries in the two storage groups respectively. The SQL statements for creating timeseries are as follows:
 
 ```
@@ -107,10 +90,48 @@ The results are shown below respectly:
 
 It is worth noting that when the queried path does not exist, the system will return no timeseries.
 
-### Precautions
+### Count Timeseries
+IoTDB is able to use `COUNT TIMESERIES` to count the number of timeseries in the path. SQL statements are as follows:
+```
+IoTDB > COUNT TIMESERIES root
+IoTDB > COUNT TIMESERIES root.ln
+IoTDB > COUNT TIMESERIES root.ln.*.*.status
+IoTDB > COUNT TIMESERIES root.ln.wf01.wt01.status
+```
+
+### Delete Timeseries
+To delete the timeseries we created before, we can use the `DELETE TimeSeries ` statement.
 
-Version 0.7.0 imposes some limitations on the scale of data that users can operate:
+The usage is as follows:
+```
+IoTDB> delete timeseries root.ln.wf01.wt01.status
+IoTDB> delete timeseries root.ln.wf01.wt01.temperature, root.ln.wf02.wt02.hardware
+IoTDB> delete timeseries root.ln.wf02*
+```
+
+### Show Devices
+IoTDB allows users to check their devices using the `SHOW DEVICES` statement.
+SQL statement is as follows:
+```
+IoTDB> show devices
+```
+
+## TTL
+IoTDB supports storage-level TTL settings, which means it is able to delete old data automatically and periodically. The benefit of using TTL is that you can control the total disk space usage and prevent the machine from running out of disk. Moreover, query performance may degrade as the total number of files goes up, and memory usage also increases as there are more files. Removing such files in a timely manner helps to keep query performance at a high level and to reduce memory usage.
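A TTL value, as used by the `SET TTL` statement, is given in milliseconds. As a quick sanity check, a small helper (hypothetical, not part of IoTDB) can convert a retention window into that millisecond value and assemble the statement:

```java
// Hypothetical helper: converts a retention window into the millisecond TTL
// value used by IoTDB's "set ttl to <storage group> <ttl in ms>" statement.
// Not part of IoTDB's API; shown only to make the unit explicit.
public class TtlHelper {
    public static long hoursToTtlMs(long hours) {
        return hours * 60L * 60L * 1000L;
    }

    public static String setTtlStatement(String storageGroup, long ttlMs) {
        return "set ttl to " + storageGroup + " " + ttlMs;
    }

    public static void main(String[] args) {
        // Keep only the latest hour of data in root.ln.
        System.out.println(setTtlStatement("root.ln", hoursToTtlMs(1)));
    }
}
```

For a one-hour window this produces `set ttl to root.ln 3600000`, matching the example statement in this section.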
+
+### Set TTL
+The SQL statement for setting TTL is as follows:
+```
+IoTDB> set ttl to root.ln 3600000
+```
+This example means that for data in `root.ln`, only that of the latest 1 hour will remain; the older data is removed or made invisible.
+
+### Unset TTL
+
+To unset TTL, we can use the following SQL statement:
+```
+IoTDB> unset ttl to root.ln
+```
+After unsetting TTL, all data in `root.ln` will be accepted.
 
-Limit 1: Assuming that the JVM memory allocated to IoTDB at runtime is p and the user-defined size of data in memory written to disk ([group\_size\_in\_byte](/#/Documents/progress/chap3/sec2)) is q, then the number of storage groups should not exceed p/q.
-Limit 2: The number of timeseries should not exceed the ratio of JVM memory allocated to IoTDB at run time to 20KB.
diff --git a/docs/Documentation/UserGuide/4-Operation Manual/4-Data Query.md b/docs/Documentation/UserGuide/5-Operation Manual/2-DML (Data Manipulation Language).md
similarity index 82%
rename from docs/Documentation/UserGuide/4-Operation Manual/4-Data Query.md
rename to docs/Documentation/UserGuide/5-Operation Manual/2-DML (Data Manipulation Language).md
index 28644f993489fffe73d96d01503ee3501e611496..6ac5d337ac0325c4dc097be185394ba120cd9e53 100644
--- a/docs/Documentation/UserGuide/4-Operation Manual/4-Data Query.md
+++ b/docs/Documentation/UserGuide/5-Operation Manual/2-DML (Data Manipulation Language).md
@@ -1,4 +1,4 @@
-
-# Chapter 4: Operation Manual
+# Chapter 5: Operation Manual
+# DML (Data Manipulation Language)
+## INSERT
+### Insert Real-time Data
-## Data Query
+IoTDB provides users with a variety of ways to insert real-time data, such as directly inputting [INSERT SQL statement](/#/Documents/progress/chap5/sec4) in [Client/Shell tools](/#/Tools/Client), or using [Java JDBC](/#/Documents/progress/chap4/sec2) to perform single or batch execution of [INSERT SQL statement](/#/Documents/progress/chap5/sec4).
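The JDBC route mentioned above is typically used for batch execution. The sketch below only assembles a batch of INSERT strings, which is the part that actually runs here; the commented lines show roughly how such a batch would be executed over a real connection (the driver class and connection URL are assumptions based on the Java JDBC chapter, not verified here):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of collecting INSERT statements for batch execution over JDBC.
// Only the statement assembly runs here; the commented code shows the
// assumed shape of a real JDBC session against an IoTDB server.
public class BatchInsertSketch {
    public static List<String> buildBatch(long fromTime, long toTime) {
        List<String> batch = new ArrayList<>();
        for (long t = fromTime; t <= toTime; t++) {
            batch.add("insert into root.ln.wf02.wt02(timestamp,status) values(" + t + ",true)");
        }
        // With a live server, the batch would be executed roughly like:
        //   Class.forName("org.apache.iotdb.jdbc.IoTDBDriver");
        //   try (Connection conn = DriverManager.getConnection(
        //            "jdbc:iotdb://127.0.0.1:6667/", "root", "root");
        //        Statement stmt = conn.createStatement()) {
        //       for (String sql : batch) stmt.addBatch(sql);
        //       stmt.executeBatch();
        //   }
        return batch;
    }

    public static void main(String[] args) {
        System.out.println(buildBatch(1, 3).size()); // 3 statements
    }
}
```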
+
+This section mainly introduces the use of [INSERT SQL statement](/#/Documents/progress/chap5/sec4) for real-time data import in the scenario. See Section 5.4 for the detailed syntax of the [INSERT SQL statement](/#/Documents/progress/chap5/sec4).
+
+#### Use of INSERT Statements
+The [INSERT SQL statement](/#/Documents/progress/chap5/sec4) can be used to insert data into one or more specified timeseries that have been created. Each point of data inserted consists of a [timestamp](/#/Documents/progress/chap2/sec1) and a sensor acquisition value (see [Data Type](/#/Documents/progress/chap2/sec2)).
+
+In the scenario of this section, take two timeseries `root.ln.wf02.wt02.status` and `root.ln.wf02.wt02.hardware` as an example; their data types are BOOLEAN and TEXT, respectively.
+
+The sample code for single column data insertion is as follows:
+```
+IoTDB > insert into root.ln.wf02.wt02(timestamp,status) values(1,true)
+IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1, "v1")
+```
+
+The above example code inserts the long integer timestamp and the value "true" into the timeseries `root.ln.wf02.wt02.status`, and inserts the long integer timestamp and the value "v1" into the timeseries `root.ln.wf02.wt02.hardware`. When the execution is successful, the cost time is shown to indicate that the data insertion has been completed.
+
+> Note: In IoTDB, TEXT type data can be represented by single and double quotation marks. The insertion statement above uses double quotation marks for TEXT type data. The following example will use single quotation marks for TEXT type data.
+
+The INSERT statement can also support the insertion of multi-column data at the same time point.
The sample code of inserting the values of the two timeseries at the same time point '2' is as follows: + +``` +IoTDB > insert into root.ln.wf02.wt02(timestamp, status, hardware) VALUES (2, false, 'v2') +``` + +After inserting the data, we can simply query the inserted data using the SELECT statement: + +``` +IoTDB > select * from root.ln.wf02 where time < 3 +``` + +The result is shown below. From the query results, it can be seen that the insertion statements of single column and multi column data are performed correctly. + +
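The single- and multi-column INSERT statements above follow a fixed shape: device path, timestamp, measurement list, value list. As a rough illustration, a small Java helper (hypothetical, not part of IoTDB's API) can assemble such a statement; note how the TEXT value keeps its quotation marks:

```java
import java.util.List;

// Hypothetical helper that assembles an IoTDB INSERT statement from its
// parts. It only mirrors the SQL shape shown in this section; it is not
// part of IoTDB's API and performs no validation.
public class InsertBuilder {
    public static String buildInsert(String device, long timestamp,
                                     List<String> measurements, List<String> values) {
        return "insert into " + device
                + "(timestamp," + String.join(",", measurements) + ")"
                + " values(" + timestamp + "," + String.join(",", values) + ")";
    }

    public static void main(String[] args) {
        // Reproduces the multi-column example at time point 2; the TEXT
        // value 'v2' must be passed with its quotation marks.
        System.out.println(buildInsert("root.ln.wf02.wt02", 2,
                List.of("status", "hardware"), List.of("false", "'v2'")));
    }
}
```

This prints the same statement as the multi-column example above, `insert into root.ln.wf02.wt02(timestamp,status,hardware) values(2,false,'v2')`.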
+
+### Error Handling of INSERT Statements
+If the user inserts data into a non-existent timeseries, for example, execute the following command:
+
+```
+IoTDB > insert into root.ln.wf02.wt02(timestamp, temperature) values(1,"v1")
+```
+
+Because `root.ln.wf02.wt02.temperature` does not exist, the system will return the following ERROR information:
+
+```
+Msg: Current deviceId[root.ln.wf02.wt02] does not contain measurement:temperature
+```
+If the data type inserted by the user is inconsistent with the corresponding data type of the timeseries, for example, execute the following command:
+
+```
+IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1,100)
+```
+The system will return the following ERROR information:
+
+```
+error: The TEXT data type should be covered by " or '
+```
+
+## SELECT
 ### Time Slice Query
 
-This chapter mainly introduces the relevant examples of time slice query using IoTDB SELECT statements. Detailed SQL syntax and usage specifications can be found in [SQL Documentation](/#/Documents/progress/chap4/sec7). You can also use the [Java JDBC](/#/Documents/progress/chap6/sec1) standard interface to execute related queries.
+This chapter mainly introduces the relevant examples of time slice query using IoTDB SELECT statements. Detailed SQL syntax and usage specifications can be found in [SQL Documentation](/#/Documents/progress/chap5/sec4). You can also use the [Java JDBC](/#/Documents/progress/chap4/sec2) standard interface to execute related queries.
 
 #### Select a Column of Data Based on a Time Interval
 
@@ -83,7 +144,7 @@ The execution result of this SQL statement is as follows:
### Down-Frequency Aggregate Query -This section mainly introduces the related examples of down-frequency aggregation query, using the [GROUP BY clause](/#/Documents/progress/chap4/sec7), which is used to partition the result set according to the user's given partitioning conditions and aggregate the partitioned result set. IoTDB supports partitioning result sets according to time intervals, and by default results are sorted by time in ascending order. You can also use the [Java JDBC](/#/Documents/progress/chap6/sec1) standard interface to execute related queries. +This section mainly introduces the related examples of down-frequency aggregation query, using the [GROUP BY clause](/#/Documents/progress/chap5/sec4), which is used to partition the result set according to the user's given partitioning conditions and aggregate the partitioned result set. IoTDB supports partitioning result sets according to time intervals, and by default results are sorted by time in ascending order. You can also use the [Java JDBC](/#/Documents/progress/chap4/sec2) standard interface to execute related queries. The GROUP BY statement provides users with three types of specified parameters: @@ -95,7 +156,7 @@ The actual meanings of the three types of parameters are shown in Figure 3.2 bel
-**Figure 4.2 The actual meanings of the three types of parameters**
+**Figure 5.2 The actual meanings of the three types of parameters**
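Under this reading of the figure, partitioning by time interval is simple bucket arithmetic: given an origin and a partition interval, each timestamp maps to a partition index. The sketch below illustrates the assumed semantics only; it is not IoTDB's implementation:

```java
// Toy illustration of GROUP BY time partitioning: with an origin and a
// partition interval (both in milliseconds), each timestamp falls into
// partition floor((t - origin) / interval). This is a simplified reading
// of the figure's semantics, not IoTDB's implementation.
public class TimePartition {
    public static long partitionIndex(long timestampMs, long originMs, long intervalMs) {
        return Math.floorDiv(timestampMs - originMs, intervalMs);
    }

    public static void main(String[] args) {
        long origin = 0;
        long interval = 60_000; // 1-minute partitions
        // 125 s after the origin falls into the third partition (index 2).
        System.out.println(partitionIndex(125_000, origin, interval));
    }
}
```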
#### Down-Frequency Aggregate Query without Specifying the Time Axis Origin Position The SQL statement is: @@ -167,7 +228,7 @@ In the actual use of IoTDB, when doing the query operation of timeseries, situat Automated fill function refers to filling empty values according to the user's specified method and effective time range when performing timeseries queries for single or multiple columns. If the queried point's value is not null, the fill function will not work. -> Note: In the current version 0.7.0, IoTDB provides users with two methods: Previous and Linear. The previous method fills blanks with previous value. The linear method fills blanks through linear fitting. And the fill function can only be used when performing point-in-time queries. +> Note: In the current version, IoTDB provides users with two methods: Previous and Linear. The previous method fills blanks with previous value. The linear method fills blanks through linear fitting. And the fill function can only be used when performing point-in-time queries. #### Fill Function * Previous Function @@ -273,9 +334,9 @@ When the fill method is not specified, each data type bears its own default fill ### Row and Column Control over Query Results -IoTDB provides [LIMIT/SLIMIT](/#/Documents/progress/chap5/sec1) clause and [OFFSET/SOFFSET](/#/Documents/progress/chap5/sec1) clause in order to make users have more control over query results. The use of LIMIT and SLIMIT clauses allows users to control the number of rows and columns of query results, and the use of OFFSET and SOFSET clauses allows users to set the starting position of the results for display. +IoTDB provides [LIMIT/SLIMIT](/#/Documents/progress/chap5/sec4) clause and [OFFSET/SOFFSET](/#/Documents/progress/chap5/sec4) clause in order to make users have more control over query results. 
The use of LIMIT and SLIMIT clauses allows users to control the number of rows and columns of query results, and the use of OFFSET and SOFFSET clauses allows users to set the starting position of the results for display.
 
-This chapter mainly introduces related examples of row and column control of query results. You can also use the [Java JDBC](/#/Documents/progress/chap6/sec1) standard interface to execute queries.
+This chapter mainly introduces related examples of row and column control of query results. You can also use the [Java JDBC](/#/Documents/progress/chap4/sec2) standard interface to execute queries.
 
 #### Row Control over Query Results
 By using LIMIT and OFFSET clauses, users can control the query results in a row-related manner. We will demonstrate how to use LIMIT and OFFSET clauses through the following examples.
 
@@ -483,3 +544,38 @@ select * from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time <
 ```
 The SQL statement will not be executed and the corresponding error prompt is given as follows:
+
+
+
+## DELETE
+
+Users can delete data that meet the deletion condition in the specified timeseries by using the [DELETE statement](/#/Documents/progress/chap5/sec4). When deleting data, users can select one or more timeseries paths, prefix paths, or paths with star to delete data before a certain time (the current version does not support the deletion of data within a closed time interval).
+
+In a JAVA programming environment, you can use the [Java JDBC](/#/Documents/progress/chap4/sec2) to execute single or batch DELETE statements.
+
+### Delete Single Timeseries
+Taking ln Group as an example, there exists such a usage scenario:
+
+The wf02 plant's wt02 device has many segments of errors in its power supply status before 2017-11-01 16:26:00, and the data cannot be analyzed correctly. The erroneous data affected the correlation analysis with other devices. At this point, the data before this time point needs to be deleted. The SQL statement for this operation is
+
+```
+delete from root.ln.wf02.wt02.status where time<=2017-11-01T16:26:00;
+```
+
+### Delete Multiple Timeseries
+When both the power supply status and hardware version of the ln group wf02 plant wt02 device before 2017-11-01 16:26:00 need to be deleted, [the prefix path with broader meaning or the path with star](/#/Documents/progress/chap2/sec1) can be used to delete the data.
The SQL statement for this operation is:
+
+```
+delete from root.ln.wf02.wt02 where time <= 2017-11-01T16:26:00;
+```
+or
+
+```
+delete from root.ln.wf02.wt02.* where time <= 2017-11-01T16:26:00;
+```
+It should be noted that when the deleted path does not exist, IoTDB will give the corresponding error prompt as shown below:
+
+```
+IoTDB> delete from root.ln.wf03.wt02.status where time < now()
+Msg: TimeSeries does not exist and its data cannot be deleted
+```
diff --git a/docs/Documentation/UserGuide/4-Operation Manual/6-Priviledge Management.md b/docs/Documentation/UserGuide/5-Operation Manual/3-Account Management Statements.md
similarity index 96%
rename from docs/Documentation/UserGuide/4-Operation Manual/6-Priviledge Management.md
rename to docs/Documentation/UserGuide/5-Operation Manual/3-Account Management Statements.md
index ca2c169a966437e6ed6fc6902bd07e1dce0453d2..50221c3beb97c9aa9dd9624db97e69e0ec6375dc 100644
--- a/docs/Documentation/UserGuide/4-Operation Manual/6-Priviledge Management.md
+++ b/docs/Documentation/UserGuide/5-Operation Manual/3-Account Management Statements.md
@@ -19,12 +19,12 @@
 -->
-# Chapter 4: Operation Manual
+# Chapter 5: Operation Manual
-## Priviledge Management
-IoTDB provides users with priviledge management operations, so as to ensure data security.
+## Account Management Statements
+IoTDB provides users with account privilege management operations, so as to ensure data security.
-We will show you basic user priviledge management operations through the following specific examples.
+We will show you basic user privilege management operations through the following specific examples.
Detailed SQL syntax and usage details can be found in [SQL Documentation](/#/Documents/progress/chap5/sec4). At the same time, in the JAVA programming environment, you can use the [Java JDBC](/#/Documents/progress/chap4/sec2) to execute privilege management statements in a single or batch mode.
 
 ### Basic Concepts
 #### User
diff --git a/docs/Documentation/UserGuide/4-Operation Manual/7-IoTDB Query Statement.md b/docs/Documentation/UserGuide/5-Operation Manual/4-SQL Reference.md
similarity index 99%
rename from docs/Documentation/UserGuide/4-Operation Manual/7-IoTDB Query Statement.md
rename to docs/Documentation/UserGuide/5-Operation Manual/4-SQL Reference.md
index ee2c57e5707a52ebc4a1493b01cae5a9797d4c1a..e4e361045cad9f75b83e2450caca67bc3857c81f 100644
--- a/docs/Documentation/UserGuide/4-Operation Manual/7-IoTDB Query Statement.md
+++ b/docs/Documentation/UserGuide/5-Operation Manual/4-SQL Reference.md
@@ -18,8 +18,9 @@
 under the License.
 -->
-# Chapter 4 Operation Manual
-# IoTDB SQL Statement
+# Chapter 5: Operation Manual
+
+## SQL Reference
 In this part, we will introduce you IoTDB's Query Language. IoTDB offers you a SQL-like query language for interacting with IoTDB, the query language can be devided into 4 major parts:
 * Schema Statement: statements about schema management are all listed in this section.
@@ -29,8 +30,6 @@ In this part, we will introduce you IoTDB's Query Language. IoTDB offers you a S
 All of these statements are write in IoTDB's own syntax, for details about the syntax composition, please check the `Reference` section.
 
-## IoTDB Query Statement
 
 ### Schema Statement
 
@@ -138,6 +137,13 @@ Note: The path can be prefix path or timeseries path.
 Note: This statement can be used in IoTDB Client and JDBC.
 ```
+* Show Devices Statement
+```
+SHOW DEVICES
+Eg: IoTDB > SHOW DEVICES
+Note: This statement can be used in IoTDB Client and JDBC.
+```
+
 ### Data Management Statement
 
 * Insert Record Statement
@@ -524,8 +530,8 @@ Eg: IoTDB > LIST ALL USER OF ROLE roleuser;
 ```
 ALTER USER SET PASSWORD ;
 roleName:=identifier
-password:=string
+password:=identifier
-Eg: IoTDB > UPDATE USER tempuser SET PASSWORD 'newpwd';
+Eg: IoTDB > ALTER USER tempuser SET PASSWORD 'newpwd';
 ```
 
 ### Functions
diff --git a/docs/Documentation/UserGuide/9-System Tools/1-Sync Tool.md b/docs/Documentation/UserGuide/6-System Tools/1-Sync Tool.md
similarity index 99%
rename from docs/Documentation/UserGuide/9-System Tools/1-Sync Tool.md
rename to docs/Documentation/UserGuide/6-System Tools/1-Sync Tool.md
index c04129d0209e93d7f574ae05356c5cfadf3fd756..8e997534f31c40790d6b2b317290cc476af9399f 100644
--- a/docs/Documentation/UserGuide/9-System Tools/1-Sync Tool.md
+++ b/docs/Documentation/UserGuide/6-System Tools/1-Sync Tool.md
@@ -19,13 +19,13 @@
 -->
-# Chapter 9: System Tools
+# Chapter 6: System Tools
 ## Data Import
-- [Chapter 8: System Tools](#chapter-8-system-tools)
+- [Chapter 6: System Tools](#chapter-6-system-tools)
 - [Data Import](#data-import)
   - [Introduction](#introduction)
   - [Application Scenario](#application-scenario)
diff --git a/docs/Documentation/UserGuide/9-System Tools/2-Memory Estimation Tool.md b/docs/Documentation/UserGuide/6-System Tools/2-Memory Estimation Tool.md
similarity index 99%
rename from docs/Documentation/UserGuide/9-System Tools/2-Memory Estimation Tool.md
rename to docs/Documentation/UserGuide/6-System Tools/2-Memory Estimation Tool.md
index 00397ca5781c0aa58d04c6b1ed65303dfb0acbb6..54238ef25703955465a87e11faf51c04d6b51732 100644
--- a/docs/Documentation/UserGuide/9-System Tools/2-Memory Estimation Tool.md
+++ b/docs/Documentation/UserGuide/6-System Tools/2-Memory Estimation Tool.md
@@ -19,7 +19,7 @@
 -->
-# Chapter 9: System Tools
+# Chapter 6: System Tools
 ## Memory Estimation Tool
diff --git a/docs/Documentation/UserGuide/9-System Tools/3-JMX Tool.md b/docs/Documentation/UserGuide/6-System
Tools/3-JMX Tool.md similarity index 98% rename from docs/Documentation/UserGuide/9-System Tools/3-JMX Tool.md rename to docs/Documentation/UserGuide/6-System Tools/3-JMX Tool.md index 79133916faf52c8b62ad79a243108499e2bdb4c5..3292d21e46f45a1587e9082bdc8dea2718cf2500 100644 --- a/docs/Documentation/UserGuide/9-System Tools/3-JMX Tool.md +++ b/docs/Documentation/UserGuide/6-System Tools/3-JMX Tool.md @@ -19,7 +19,7 @@ --> -# Chapter 9: System Tools +# Chapter 6: System Tools # JMX Tool diff --git a/docs/Documentation/UserGuide/9-System Tools/4-Watermark Tool.md b/docs/Documentation/UserGuide/6-System Tools/4-Watermark Tool.md similarity index 99% rename from docs/Documentation/UserGuide/9-System Tools/4-Watermark Tool.md rename to docs/Documentation/UserGuide/6-System Tools/4-Watermark Tool.md index 72f320d4b242db4a3c7e9af384388535a2d6b167..6deda9ce212fffe587b0577f2eb4f002a05da4ed 100644 --- a/docs/Documentation/UserGuide/9-System Tools/4-Watermark Tool.md +++ b/docs/Documentation/UserGuide/6-System Tools/4-Watermark Tool.md @@ -21,7 +21,7 @@ under the License. 
--> -# Chapter 9: System Tools +# Chapter 6: System Tools # Watermark Tool diff --git a/docs/Documentation/UserGuide/9-System Tools/5-Log Visualizer.md b/docs/Documentation/UserGuide/6-System Tools/5-Log Visualizer.md similarity index 99% rename from docs/Documentation/UserGuide/9-System Tools/5-Log Visualizer.md rename to docs/Documentation/UserGuide/6-System Tools/5-Log Visualizer.md index 7f02dfba42e22f0ef705b6ff67995f7319536f57..6730a649b5c4db36018070292c51850cc6c0a292 100644 --- a/docs/Documentation/UserGuide/9-System Tools/5-Log Visualizer.md +++ b/docs/Documentation/UserGuide/6-System Tools/5-Log Visualizer.md @@ -19,7 +19,7 @@ --> -# Chapter 9: System Tools +# Chapter 6: System Tools ## LogVisualizer diff --git a/docs/Documentation/UserGuide/9-System Tools/6-Query History Visualization Tool.md b/docs/Documentation/UserGuide/6-System Tools/6-Query History Visualization Tool.md similarity index 98% rename from docs/Documentation/UserGuide/9-System Tools/6-Query History Visualization Tool.md rename to docs/Documentation/UserGuide/6-System Tools/6-Query History Visualization Tool.md index e795950dd91d6f8357da3daed75e7ecdc2758b81..42dbfd4fc1530a222bf85e2a4efbe8b5668fef2a 100644 --- a/docs/Documentation/UserGuide/9-System Tools/6-Query History Visualization Tool.md +++ b/docs/Documentation/UserGuide/6-System Tools/6-Query History Visualization Tool.md @@ -19,7 +19,7 @@ --> -# Chapter 9: System Tools +# Chapter 6: System Tools ## Query History Visualization Tool diff --git a/docs/Documentation/UserGuide/5-Management/1-System Monitor.md b/docs/Documentation/UserGuide/6-System Tools/7-Monitor and Log Tools.md similarity index 64% rename from docs/Documentation/UserGuide/5-Management/1-System Monitor.md rename to docs/Documentation/UserGuide/6-System Tools/7-Monitor and Log Tools.md index db6bfab08ccdc6c76a3b45edec2eb10096eee9a7..11cb4867db9dceaae914be80396c78c357516636 100644 --- a/docs/Documentation/UserGuide/5-Management/1-System Monitor.md +++ 
b/docs/Documentation/UserGuide/6-System Tools/7-Monitor and Log Tools.md @@ -19,8 +19,8 @@ --> -# Chapter 5: Management - +# Chapter 6: System Tools +# Monitor and Log Tools ## System Monitor Currently, IoTDB allows users to use Java's JConsole tool to monitor system status, or to use IoTDB's open API to check data status. @@ -357,3 +357,115 @@ Here are the file size statistics: |Timeseries Name| root.stats.file\_size.SCHEMA | |Reset After Restarting System| No | |Example| select SCHEMA from root.stats.file\_size.SCHEMA| + +## Performance Monitor + +### Introduction + +In order to track the performance of IoTDB, this module counts the time consumption of each operation. It computes the average time consumption of each operation, and the proportion of executions whose time consumption falls into each time range. The output is written to the log_measure.log file. An output example is below. + + + +### Configuration parameters + +Location: conf/iotdb-engine.properties + +
**Table: parameters and description** + +|Parameter|Default Value|Description| +|:---|:---|:---| +|enable\_performance\_stat|false|Whether the performance statistics of sub-modules are enabled.| +|performance\_stat\_display\_interval|60000|The interval at which statistic results are displayed, in ms.| +|performance_stat_memory_in_kb|20|The memory used for performance statistics, in KB.| +
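The three parameters above would be set in `conf/iotdb-engine.properties`. A minimal fragment enabling the module might look like this (the values shown are illustrative, not tuning recommendations):

```
# enable per-operation time-consumption statistics
enable_performance_stat=true
# print statistic results every 60 s
performance_stat_display_interval=60000
# memory budget for the statistics module, in KB
performance_stat_memory_in_kb=20
```

As noted above, the statistics are off by default (`enable_performance_stat=false`), so only the first line is strictly required to turn the module on.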
+ ### JMX MBean + +Connect to JConsole with port 31999, and choose 'MBean' in the menu bar. Expand the sidebar and choose 'org.apache.iotdb.db.cost.statistic'. You can find: + + + +**Attribute** + +1. EnableStat: Whether the statistics are enabled. If true, the module records the time consumption of each operation and prints the results. It is not directly editable, but can be changed by the operations below. + +2. DisplayIntervalInMs: The interval between printed results. Changes will not take effect instantly; to make them effective, you should call startContinuousStatistics() or startOneTimeStatistics(). +3. OperationSwitch: A map indicating whether the statistics of one kind of operation should be computed; the key is the operation name, and a value of true means the statistics of that operation are enabled, otherwise disabled. This parameter cannot be changed directly; it is changed by the operation 'changeOperationSwitch()'. + +**Operation** + +1. startContinuousStatistics: Start the statistics and output at intervals of 'DisplayIntervalInMs'. +2. startOneTimeStatistics: Start the statistics and output once after a delay of 'DisplayIntervalInMs'. +3. stopStatistic: Stop the statistics. +4. clearStatisticalState(): Clear the current statistical result and reset it. +5. changeOperationSwitch(String operationName, Boolean operationState): Set whether to monitor a kind of operation. The parameter 'operationName' is the name of the operation, defined in the attribute OperationSwitch. The parameter 'operationState' indicates whether to enable the statistics. If the state is switched successfully, the function returns true; otherwise it returns false. + +### Adding Custom Monitoring Items for Contributors of IoTDB + +**Add Operation** + +Add an enumeration in org.apache.iotdb.db.cost.statistic.Operation. + +**Add Timing Code in Monitoring Area** + +Add timing code in the monitoring start area: + + long t0 = System.currentTimeMillis(); + +Add timing code in the monitoring stop area: + + Measurement.INSTANCE.addOperationLatency(Operation, t0); + +## Cache Hit Ratio Statistics + +### Overview + +To improve query performance, IoTDB caches ChunkMetaData and TsFileMetaData. Users can view the cache hit ratio through the debug-level log and MXBean, and adjust the memory occupied by the cache according to the cache hit ratio and system memory. The method of using MXBean to view the cache hit ratio is as follows: +1. Connect to JConsole with port 31999 and select 'MBean' in the menu item above. +2. Expand the sidebar and select 'org.apache.iotdb.db.service'. You will get the results shown in the following figure: + + +## System log + +IoTDB allows users to configure IoTDB system logs (such as log output level) by modifying the log configuration file. The default location of the system log configuration file is the \$IOTDB_HOME/conf folder. + +The default log configuration file is named logback.xml. The user can modify the configuration of the system running log by adding or changing the XML tree node parameters. It should be noted that the configuration of the system log using the log configuration file does not take effect immediately after the modification; instead, it takes effect after restarting the system. The usage of logback.xml is just as usual. + +At the same time, in order to facilitate debugging of the system by developers and DBAs, we provide several JMX interfaces to dynamically modify the log configuration, and to configure the Log module of the system in real time without restarting the system. + +### Dynamic System Log Configuration + +#### Connect JMX + +Here we use JConsole to connect with JMX. + +Start the JConsole and establish a new JMX connection with the IoTDB Server (you can select the local process or input the IP and PORT for remote connection; the default operation port of the IoTDB JMX service is 31999). Fig 4.1 shows the connection GUI of JConsole.
+ + + +After connected, click `MBean` and find `ch.qos.logback.classic.default.ch.qos.logback.classic.jmx.JMXConfigurator` (as shown in fig 4.2). + + +In the JMXConfigurator window, six operations are provided, as shown in fig 4.3. You can use these interfaces to perform operations. + + + +#### Interface Instruction + +* reloadDefaultConfiguration + +This method reloads the default logback configuration file. The user can modify the default configuration file first, and then call this method to reload the modified configuration file into the system so that it takes effect. + +* reloadByFileName + +This method loads a logback configuration file with the specified path and name, and then makes it take effect. It accepts a parameter of type String named p1, which is the path of the configuration file to load. + +* getLoggerEffectiveLevel + +This method obtains the current log level of the specified Logger. It accepts a String parameter named p1, which is the name of the specified Logger, and returns the log level currently in effect for that Logger. + +* getLoggerLevel + +This method obtains the log level of the specified Logger. It accepts a String parameter named p1, which is the name of the specified Logger, and returns the log level of that Logger. +It should be noted that the difference between this method and the `getLoggerEffectiveLevel` method is that this method returns the log level that is set for the specified Logger in the configuration file; if the user has not set a log level for that Logger, it returns null. According to the Logger's log-level inheritance mechanism, if a Logger's level is not explicitly set, it inherits the log level settings from its nearest ancestor. In that case, calling the `getLoggerEffectiveLevel` method returns the log level in effect for the Logger, while calling `getLoggerLevel` returns null.
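The `getLoggerLevel` / `getLoggerEffectiveLevel` distinction described above can be modeled in a few lines. This is a toy sketch of the ancestor walk, not logback's actual implementation; the logger names and levels used here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class LevelSketch {
    // Levels explicitly set by the configuration file (hypothetical example data).
    static final Map<String, String> CONFIGURED = new HashMap<>();
    static {
        CONFIGURED.put("ROOT", "DEBUG");
        CONFIGURED.put("org.apache.iotdb", "INFO");
    }

    // Analogous to getLoggerLevel: only what the configuration set, else null.
    static String getLoggerLevel(String name) {
        return CONFIGURED.get(name);
    }

    // Analogous to getLoggerEffectiveLevel: inherit from the nearest
    // configured ancestor along the dotted logger name, falling back to ROOT.
    static String getLoggerEffectiveLevel(String name) {
        String n = name;
        while (true) {
            String level = CONFIGURED.get(n);
            if (level != null) {
                return level;
            }
            int dot = n.lastIndexOf('.');
            if (dot < 0) {
                return CONFIGURED.get("ROOT");
            }
            n = n.substring(0, dot);
        }
    }

    public static void main(String[] args) {
        // No level was set for this logger, so the configured level is null...
        System.out.println(getLoggerLevel("org.apache.iotdb.db.service"));
        // ...but the effective level is inherited from org.apache.iotdb.
        System.out.println(getLoggerEffectiveLevel("org.apache.iotdb.db.service"));
    }
}
```

In this model, `getLoggerLevel` answers "what did the configuration file say for exactly this Logger?", while `getLoggerEffectiveLevel` answers "which level actually applies after inheritance?".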
+ diff --git a/docs/Documentation/UserGuide/10-Ecosystem Integration/1-Grafana.md b/docs/Documentation/UserGuide/7-Ecosystem Integration/1-Grafana.md similarity index 98% rename from docs/Documentation/UserGuide/10-Ecosystem Integration/1-Grafana.md rename to docs/Documentation/UserGuide/7-Ecosystem Integration/1-Grafana.md index 973bf56577abb349ceed871eecb841169b95e139..b463ac8c21a3f5bf3b5c19b41becb8ecd1dbddaf 100644 --- a/docs/Documentation/UserGuide/10-Ecosystem Integration/1-Grafana.md +++ b/docs/Documentation/UserGuide/7-Ecosystem Integration/1-Grafana.md @@ -18,7 +18,7 @@ under the License. --> -# Chapter 10: Ecosystem Integration +# Chapter 7: Ecosystem Integration # Grafana ## Outline diff --git a/docs/Documentation/UserGuide/10-Ecosystem Integration/2-TsFile Hadoop Connector.md b/docs/Documentation/UserGuide/7-Ecosystem Integration/2-TsFile Hadoop Connector.md similarity index 99% rename from docs/Documentation/UserGuide/10-Ecosystem Integration/2-TsFile Hadoop Connector.md rename to docs/Documentation/UserGuide/7-Ecosystem Integration/2-TsFile Hadoop Connector.md index dfbf321c83c9b41f8413d16a3da0715ecd8160b2..9a6834bb0f3cd1521b415089e1b9a7b4c45865de 100644 --- a/docs/Documentation/UserGuide/10-Ecosystem Integration/2-TsFile Hadoop Connector.md +++ b/docs/Documentation/UserGuide/7-Ecosystem Integration/2-TsFile Hadoop Connector.md @@ -18,7 +18,7 @@ under the License. 
--> -# Chapter 10: Ecosystem Integration +# Chapter 7: Ecosystem Integration # TsFile-Hadoop-Connector ## Outline diff --git a/docs/Documentation/UserGuide/10-Ecosystem Integration/3-TsFile Spark Connector.md b/docs/Documentation/UserGuide/7-Ecosystem Integration/3-TsFile Spark Connector.md similarity index 99% rename from docs/Documentation/UserGuide/10-Ecosystem Integration/3-TsFile Spark Connector.md rename to docs/Documentation/UserGuide/7-Ecosystem Integration/3-TsFile Spark Connector.md index 569b2cc44dab0f805731dd5758c365f1596ecb38..261cf09fce75cfb604210045050e70b16314bb79 100644 --- a/docs/Documentation/UserGuide/10-Ecosystem Integration/3-TsFile Spark Connector.md +++ b/docs/Documentation/UserGuide/7-Ecosystem Integration/3-TsFile Spark Connector.md @@ -19,7 +19,7 @@ --> -# Chapter 10: Ecosystem Integration +# Chapter 7: Ecosystem Integration # TsFile-Spark-Connector User Guide diff --git a/docs/Documentation/UserGuide/10-Ecosystem Integration/4-Spark IoTDB Connector.md b/docs/Documentation/UserGuide/7-Ecosystem Integration/4-Spark IoTDB Connector.md similarity index 99% rename from docs/Documentation/UserGuide/10-Ecosystem Integration/4-Spark IoTDB Connector.md rename to docs/Documentation/UserGuide/7-Ecosystem Integration/4-Spark IoTDB Connector.md index 06059f795748e07531b031108c384136faef439c..4d7aa1903abdcc4f1a6566ba2e6a606496d096d9 100644 --- a/docs/Documentation/UserGuide/10-Ecosystem Integration/4-Spark IoTDB Connector.md +++ b/docs/Documentation/UserGuide/7-Ecosystem Integration/4-Spark IoTDB Connector.md @@ -18,7 +18,7 @@ under the License. 
--> -# Chapter 10: Ecosystem Integration +# Chapter 7: Ecosystem Integration # Spark IoTDB Connecter ## version diff --git a/docs/Documentation/UserGuide/10-Ecosystem Integration/5-TsFile Hive Connector.md b/docs/Documentation/UserGuide/7-Ecosystem Integration/5-TsFile Hive Connector.md similarity index 99% rename from docs/Documentation/UserGuide/10-Ecosystem Integration/5-TsFile Hive Connector.md rename to docs/Documentation/UserGuide/7-Ecosystem Integration/5-TsFile Hive Connector.md index 3f5af495f0c0d2d40cd52d08504e06a77b2cf3c5..97e5ebdbf82d2af8e9da1765ce66c7261bfe3508 100644 --- a/docs/Documentation/UserGuide/10-Ecosystem Integration/5-TsFile Hive Connector.md +++ b/docs/Documentation/UserGuide/7-Ecosystem Integration/5-TsFile Hive Connector.md @@ -18,7 +18,7 @@ under the License. --> - +# Chapter 7: Ecosystem Integration ## Outline diff --git a/docs/Documentation/UserGuide/7-System Design/1-Hierarchy.md b/docs/Documentation/UserGuide/8-System Design (Developer)/1-Hierarchy.md similarity index 99% rename from docs/Documentation/UserGuide/7-System Design/1-Hierarchy.md rename to docs/Documentation/UserGuide/8-System Design (Developer)/1-Hierarchy.md index e176157ce527f11a4285ec6894def0537c55c038..c8e85abf012c2328b430e900ddc88e20738937ae 100644 --- a/docs/Documentation/UserGuide/7-System Design/1-Hierarchy.md +++ b/docs/Documentation/UserGuide/8-System Design (Developer)/1-Hierarchy.md @@ -19,7 +19,7 @@ --> -# Chapter 7: System Design +# Chapter 8: System Design (Developer) ## TsFile Hierarchy diff --git a/docs/Documentation/UserGuide/5-Management/4-Data Management.md b/docs/Documentation/UserGuide/8-System Design (Developer)/2-Files.md similarity index 97% rename from docs/Documentation/UserGuide/5-Management/4-Data Management.md rename to docs/Documentation/UserGuide/8-System Design (Developer)/2-Files.md index 7a3d6e15256fb196d98f49e9b50291a88da4394e..75f3162f5ba56214136313d5421449050d811128 100644 --- a/docs/Documentation/UserGuide/5-Management/4-Data 
Management.md +++ b/docs/Documentation/UserGuide/8-System Design (Developer)/2-Files.md @@ -19,10 +19,9 @@ --> -# Chapter 5: Management +# Chapter 8: System Design (Developer) - -## Data Management +## Files In IoTDB, many kinds of data need to be stored. In this section, we will introduce IoTDB's data storage strategy in order to give you an intuitive understanding of IoTDB's data management. diff --git a/docs/Documentation/UserGuide/8-Distributed Architecture/1-Shared Storage Architecture.md b/docs/Documentation/UserGuide/8-System Design (Developer)/3-Writing Data on HDFS.md similarity index 98% rename from docs/Documentation/UserGuide/8-Distributed Architecture/1-Shared Storage Architecture.md rename to docs/Documentation/UserGuide/8-System Design (Developer)/3-Writing Data on HDFS.md index c2ffcc773405ace3cd114854966f98dad7913a7c..cfdb8e710db2b5bb8b8ac13948e5a1faa35c2bbe 100644 --- a/docs/Documentation/UserGuide/8-Distributed Architecture/1-Shared Storage Architecture.md +++ b/docs/Documentation/UserGuide/8-System Design (Developer)/3-Writing Data on HDFS.md @@ -19,7 +19,9 @@ --> -# Chapter 8: Distributed Architecture +# Chapter 8: System Design (Developer) + +# Writing Data on HDFS ## Shared Storage Architecture diff --git a/docs/Documentation/UserGuide/8-Distributed Architecture/2-Shared Nothing Architecture.md b/docs/Documentation/UserGuide/8-System Design (Developer)/4-Shard Nothing Cluster.md similarity index 94% rename from docs/Documentation/UserGuide/8-Distributed Architecture/2-Shared Nothing Architecture.md rename to docs/Documentation/UserGuide/8-System Design (Developer)/4-Shard Nothing Cluster.md index 4ebcb2c6562ea7a749922cecb7cf309e1124a7e6..c93793c852ce1ec7ea11e536938b49c1eb8c659b 100644 --- a/docs/Documentation/UserGuide/8-Distributed Architecture/2-Shared Nothing Architecture.md +++ b/docs/Documentation/UserGuide/8-System Design (Developer)/4-Shard Nothing Cluster.md @@ -19,7 +19,7 @@ --> -# Chapter 8: Distributed Architecture +#
Chapter 8: System Design (Developer) ## Shared Nothing Architecture