From a62d32922e1fecaa1b9c44ca4425c599c8e3c764 Mon Sep 17 00:00:00 2001 From: 151250128 <151250128@smail.nju.edu.cn> Date: Fri, 5 Jul 2019 16:59:14 +0800 Subject: [PATCH] Separate documents into different chapter folders --- docs/Development.md | 12 + .../UserGuideV0.7.0/0-Content.md | 50 + .../1-Overview/1-What is IoTDB.md | 26 + .../1-Overview/2-Architecture.md | 36 + .../3-Scenario.md} | 42 - .../UserGuideV0.7.0/1-Overview/4-Features.md | 33 + .../1-Key Concepts and Terminology.md} | 71 -- .../2-Data Type.md | 38 + .../3-Encoding.md | 62 + .../4-Compression.md | 28 + .../3-Operation Manual/1-Sample Data.md | 28 + .../2-Data Model Selection.md | 115 ++ .../3-Operation Manual/3-Data Import.md | 87 ++ .../4-Data Query.md} | 383 ------- .../3-Operation Manual/5-Data Maintenance.md | 82 ++ .../6-Priviledge Management.md | 124 ++ .../4-Deployment and Management.md | 1019 ----------------- .../1-Deployment.md | 169 +++ .../2-Configuration.md | 329 ++++++ .../3-System Monitor.md | 359 ++++++ .../4-System log.md | 66 ++ .../5-Data Management.md | 77 ++ .../6-Build and use IoTDB by Dockerfile.md | 92 ++ .../1-IoTDB Query Statement.md} | 129 --- .../5-IoTDB SQL Documentation/2-Reference.md | 137 +++ .../1-JDBC API.md} | 5 - 26 files changed, 1950 insertions(+), 1649 deletions(-) create mode 100644 docs/Documentation/UserGuideV0.7.0/0-Content.md create mode 100644 docs/Documentation/UserGuideV0.7.0/1-Overview/1-What is IoTDB.md create mode 100644 docs/Documentation/UserGuideV0.7.0/1-Overview/2-Architecture.md rename docs/Documentation/UserGuideV0.7.0/{1-Overview.md => 1-Overview/3-Scenario.md} (66%) create mode 100644 docs/Documentation/UserGuideV0.7.0/1-Overview/4-Features.md rename docs/Documentation/UserGuideV0.7.0/{2-Concept.md => 2-Concept Key Concepts and Terminology/1-Key Concepts and Terminology.md} (62%) create mode 100644 docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/2-Data Type.md create mode 100644 
docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/3-Encoding.md create mode 100644 docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/4-Compression.md create mode 100644 docs/Documentation/UserGuideV0.7.0/3-Operation Manual/1-Sample Data.md create mode 100644 docs/Documentation/UserGuideV0.7.0/3-Operation Manual/2-Data Model Selection.md create mode 100644 docs/Documentation/UserGuideV0.7.0/3-Operation Manual/3-Data Import.md rename docs/Documentation/UserGuideV0.7.0/{3-Operation Manual.md => 3-Operation Manual/4-Data Query.md} (55%) create mode 100644 docs/Documentation/UserGuideV0.7.0/3-Operation Manual/5-Data Maintenance.md create mode 100644 docs/Documentation/UserGuideV0.7.0/3-Operation Manual/6-Priviledge Management.md delete mode 100644 docs/Documentation/UserGuideV0.7.0/4-Deployment and Management.md create mode 100644 docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/1-Deployment.md create mode 100644 docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/2-Configuration.md create mode 100644 docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/3-System Monitor.md create mode 100644 docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/4-System log.md create mode 100644 docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/5-Data Management.md create mode 100644 docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/6-Build and use IoTDB by Dockerfile.md rename docs/Documentation/UserGuideV0.7.0/{5-SQL Documentation.md => 5-IoTDB SQL Documentation/1-IoTDB Query Statement.md} (85%) create mode 100644 docs/Documentation/UserGuideV0.7.0/5-IoTDB SQL Documentation/2-Reference.md rename docs/Documentation/UserGuideV0.7.0/{6-JDBC Documentation.md => 6-JDBC API/1-JDBC API.md} (90%) diff --git a/docs/Development.md b/docs/Development.md index 2768489304..c222d44c29 100644 --- a/docs/Development.md +++ b/docs/Development.md @@ -83,6 +83,18 @@ Changes to IoTDB 
source codes are made through Github pull request. Anyone can r To propose a change to release documentation (that is, docs that appear under ), edit the Markdown source files in IoTDB's docs/ directory (`documentation-EN` branch). The process to propose a doc change is otherwise the same as the process for proposing code changes below. +Whenever updating **User Guide** documents, remember to update "0-Content.md" at the same time. Here are two brief examples showing how to add new documents or modify existing ones: + +1. Suppose we already have "Chapter 1: Overview" and want to add a new document "A.md" to chapter 1. +Then, + * Step 1: add a document named "5-A.md" in the folder "1-Overview", since it is the fifth section in this chapter; + * Step 2: modify the "0-Content.md" file by adding "* 5-A.md" to the list under "# Chapter 1: Overview". + +2. Suppose we want to create a new chapter "Chapter 7: RoadMap" and add a new document "B.md" to chapter 7. +Then, + * Step 1: create a new folder named "7-RoadMap", and add a document named "1-B.md" in the folder "7-RoadMap"; + * Step 2: modify the "0-Content.md" file by adding "# Chapter 7: RoadMap" at the end, and adding "* 1-B.md" to the list of this new chapter. + ### Contributing Bug Reports If you encounter a problem, try searching the mailing list and JIRA to check whether other people have faced the same situation. If it has not been reported before, please report an issue.
diff --git a/docs/Documentation/UserGuideV0.7.0/0-Content.md b/docs/Documentation/UserGuideV0.7.0/0-Content.md new file mode 100644 index 0000000000..a741e154de --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/0-Content.md @@ -0,0 +1,50 @@ + + +# Chapter 1: Overview +* 1-What is IoTDB +* 2-Architecture +* 3-Scenario +* 4-Features +# Chapter 2: Concept Key Concepts and Terminology +* 1-Key Concepts and Terminology +* 2-Data Type +* 3-Encoding +* 4-Compression +# Chapter 3: Operation Manual +* 1-Sample Data +* 2-Data Model Selection +* 3-Data Import +* 4-Data Query +* 5-Data Maintenance +* 6-Priviledge Management +# Chapter 4: Deployment and Management +* 1-Deployment +* 2-Configuration +* 3-System Monitor +* 4-System log +* 5-Data Management +* 6-Build and use IoTDB by Dockerfile +# Chapter 5: IoTDB SQL Documentation +* 1-IoTDB Query Statement +* 2-Reference +# Chapter 6: JDBC API +* 1-JDBC API \ No newline at end of file diff --git a/docs/Documentation/UserGuideV0.7.0/1-Overview/1-What is IoTDB.md b/docs/Documentation/UserGuideV0.7.0/1-Overview/1-What is IoTDB.md new file mode 100644 index 0000000000..1dffb353dd --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/1-Overview/1-What is IoTDB.md @@ -0,0 +1,26 @@ + + +# Chapter 1: Overview + +## What is IoTDB + +IoTDB (Internet of Things Database) is an integrated data management engine designed for timeseries data, which provides users with specific services for data collection, storage and analysis. Due to its lightweight architecture, high performance and ease of use, together with its deep integration with the Hadoop and Spark ecosystems, IoTDB meets the requirements of massive dataset storage, high-speed data input and complex data analysis in the industrial IoT field.
diff --git a/docs/Documentation/UserGuideV0.7.0/1-Overview/2-Architecture.md b/docs/Documentation/UserGuideV0.7.0/1-Overview/2-Architecture.md new file mode 100644 index 0000000000..d65a722a25 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/1-Overview/2-Architecture.md @@ -0,0 +1,36 @@ + + +# Chapter 1: Overview + +## Architecture + +Besides the IoTDB engine, we have also developed several components to provide better IoT services. All components are referred to below as the IoTDB suite, and IoTDB refers specifically to the IoTDB engine. + +The IoTDB suite provides a series of functions in real-world scenarios, such as data collection, data writing, data storage, data query, data visualization and data analysis. Figure 1.1 shows the overall application architecture brought by all the components of the IoTDB suite. + + + +As shown in Figure 1.1, users can use JDBC to import timeseries data collected by sensors on devices to a local/remote IoTDB. These timeseries data may be system state data (such as server load and CPU/memory usage), message queue data, timeseries data from applications, or other timeseries data in a database. Users can also write the data directly to a TsFile (local or on HDFS). + +For the data written to IoTDB and local TsFiles, users can use the TsFileSync tool to synchronize the TsFiles to HDFS, thereby implementing data processing tasks such as anomaly detection and machine learning on the Hadoop or Spark data processing platform. The results of the analysis can be written back to TsFiles in the same way. + +Also, IoTDB and TsFile provide client tools to meet the various needs of users in writing and viewing data in SQL form, script form and graphical form.
diff --git a/docs/Documentation/UserGuideV0.7.0/1-Overview.md b/docs/Documentation/UserGuideV0.7.0/1-Overview/3-Scenario.md similarity index 66% rename from docs/Documentation/UserGuideV0.7.0/1-Overview.md rename to docs/Documentation/UserGuideV0.7.0/1-Overview/3-Scenario.md index e4d0358653..879c5cbbce 100644 --- a/docs/Documentation/UserGuideV0.7.0/1-Overview.md +++ b/docs/Documentation/UserGuideV0.7.0/1-Overview/3-Scenario.md @@ -19,39 +19,8 @@ --> - - -- [Chapter 1: Overview](#chapter-1-overview) - - [What is IoTDB](#what-is-iotdb) - - [Architecture](#architecture) - - [Scenario](#scenario) - - [Scenario 1](#scenario-1) - - [Scenario 2](#scenario-2) - - [Scenario 3](#scenario-3) - - [Scenario 4](#scenario-4) - - [Features](#features) - - # Chapter 1: Overview -## What is IoTDB - -IoTDB(Internet of Things Database) is an integrated data management engine designed for timeseries data, which can provide users specific services for data collection, storage and analysis. Due to its light weight structure, high performance and usable features together with its intense integration with Hadoop and Spark ecology, IoTDB meets the requirements of massive dataset storage, high-speed data input and complex data analysis in the IoT industrial field. - -## Architecture - -Besides IoTDB engine, we also developed several components to provide better IoT service. All components are referred to below as the IoTDB suite, and IoTDB refers specifically to the IoTDB engine. - -IoTDB suite can provide a series of functions in the real situation such as data collection, data writing, data storage, data query, data visualization and data analysis. Figure 1.1 shows the overall application architecture brought by all the components of the IoTDB suite. - - - -As shown in Figure 1.1, users can use JDBC to import timeseries data collected by sensor on the device to local/remote IoTDB. 
These timeseries data may be system state data (such as server load and CPU memory, etc.), message queue data, timeseries data from applications, or other timeseries data in the database. Users can also write the data directly to the TsFile (local or on HDFS). - -For the data written to IoTDB and local TsFile, users can use TsFileSync tool to synchronize the TsFile to the HDFS, thereby implementing data processing tasks such as abnormality detection and machine learning on the Hadoop or Spark data processing platform. The results of the analysis can be write back to TsFile in the same way. - -Also, IoTDB and TsFile provide client tools to meet the various needs of users in writing and viewing data in SQL form, script form and graphical form. - ## Scenario ### Scenario 1 @@ -107,14 +76,3 @@ At this point, IoTDB, IoTDB-CLI, and Hadoop/Spark integration components in the In addition, Hadoop/Spark clusters need to be deployed for data storage and analysis on the data center side. As shown in Figure 1.8. - -## Features - - -* Flexible deployment. IoTDB provides users one-click installation tool on the cloud, once-decompressed-used terminal tool and the bridge tool between cloud platform and terminal tool (Data Synchronization Tool). -* Low cost on hardware. IoTDB can reach a high compression ratio of disk storage (For one billion data storage, hard drive cost less than $0.23) -* Efficient directory structure. IoTDB supports efficient oganization for complex timeseries data structure from intelligent networking devices, oganization for timeseries data from devices of the same type, fuzzy searching strategy for massive and complex directory of timeseries data. -* High-throughput read and write. IoTDB supports millions of low-power devices' strong connection data access, high-speed data read and write for intelligent networking devices and mixed devices mentioned above. -* Rich query semantics. 
IoTDB supports time alignment for timeseries data accross devices and sensors, computation in timeseries field (frequency domain transformation) and rich aggregation function support in time dimension. -* Easy to get start. IoTDB supports SQL-Like language, JDBC standard API and import/export tools which is easy to use. -* Intense integration with Open Source Ecosystem. IoTDB supports Hadoop, Spark, etc. analysis ecosystems and Grafana visualization tool. diff --git a/docs/Documentation/UserGuideV0.7.0/1-Overview/4-Features.md b/docs/Documentation/UserGuideV0.7.0/1-Overview/4-Features.md new file mode 100644 index 0000000000..5544e79500 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/1-Overview/4-Features.md @@ -0,0 +1,33 @@ + + +# Chapter 1: Overview + +## Features + + +* Flexible deployment. IoTDB provides users with a one-click installation tool on the cloud, a decompress-and-use terminal tool, and a bridge tool between the cloud platform and terminal tool (Data Synchronization Tool). +* Low cost on hardware. IoTDB can reach a high compression ratio of disk storage (for one billion data points, the hard drive costs less than $0.23). +* Efficient directory structure. IoTDB supports efficient organization of complex timeseries data structures from intelligent networking devices, organization of timeseries data from devices of the same type, and fuzzy searching strategies for massive and complex directories of timeseries data. +* High-throughput read and write. IoTDB supports strongly connected data access for millions of low-power devices, as well as high-speed data read and write for intelligent networking devices and the mixed devices mentioned above. +* Rich query semantics. IoTDB supports time alignment for timeseries data across devices and sensors, computation in the timeseries field (frequency domain transformation) and rich aggregation function support in the time dimension. +* Easy to get started. IoTDB supports a SQL-like language, the JDBC standard API and easy-to-use import/export tools.
+* Tight integration with the open-source ecosystem. IoTDB supports analysis ecosystems such as Hadoop and Spark, as well as the Grafana visualization tool. diff --git a/docs/Documentation/UserGuideV0.7.0/2-Concept.md b/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/1-Key Concepts and Terminology.md similarity index 62% rename from docs/Documentation/UserGuideV0.7.0/2-Concept.md rename to docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/1-Key Concepts and Terminology.md index a3732fcaec..15688f01b9 100644 --- a/docs/Documentation/UserGuideV0.7.0/2-Concept.md +++ b/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/1-Key Concepts and Terminology.md @@ -19,15 +19,6 @@ --> - - -- [Chapter 2: Concept](#chapter-2-concept) - - [Key Concepts and Terminology](#key-concepts-and-terminology) - - [Data Type](#data-type) - - [Encoding](#encoding) - - [Compression](#compression) - - # Chapter 2: Concept ## Key Concepts and Terminology @@ -187,65 +178,3 @@ A data point is made up of a timestamp value pair (timestamp, value). * Column A column of data contains all values belonging to a time series and the timestamps corresponding to these values. When there are multiple columns of data, IoTDB merges the timestamps into multiple < timestamp-value > pairs (timestamp, value, value,...). - -## Data Type -IoTDB supports six data types in total: BOOLEAN (Boolean), INT32 (Integer), INT64 (Long Integer), FLOAT (Single Precision Floating Point), DOUBLE (Double Precision Floating Point), TEXT (String). - - -The time series of FLOAT and DOUBLE type can specify (MAX\_POINT\_NUMBER, see [this page](#iotdb-query-statement) for more information on how to specify), which is the number of digits after the decimal point of the floating point number, if the encoding method is [RLE](#encoding) or [TS\_2DIFF](#encoding) (Refer to [Create Timeseries Statement](#chapter-5-iotdb-sql-documentation) for more information on how to specify).
If MAX\_POINT\_NUMBER is not specified, the system will use [float\_precision](#encoding) in the configuration file "tsfile-format.properties" for configuration for the configuration method. - -* For Float data value, The data range is (-Integer.MAX_VALUE, Integer.MAX_VALUE), rather than Float.MAX_VALUE, and the max_point_number is 19, it is because of the limition of function Math.round(float) in Java. -* For Double data value, The data range is (-Long.MAX_VALUE, Long.MAX_VALUE), rather than Double.MAX_VALUE, and the max_point_number is 19, it is because of the limition of function Math.round(double) in Java (Long.MAX_VALUE=9.22E18). - -When the data type of data input by the user in the system does not correspond to the data type of the time series, the system will report type errors. As shown below, the second-order difference encoding does not support the Boolean type: - -``` -IoTDB> create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODING=TS_2DIFF -error: encoding TS_2DIFF does not support BOOLEAN -``` - -## Encoding -In order to improve the efficiency of data storage, it is necessary to encode data during data writing, thereby reducing the amount of disk space used. In the process of writing and reading data, the amount of data involved in the I/O operations can be reduced to improve performance. IoTDB supports four encoding methods for different types of data: - -* PLAIN - -PLAIN encoding, the default encoding mode, i.e, no encoding, supports multiple data types. It has high compression and decompression efficiency while suffering from low space storage efficiency. - -* TS_2DIFF - -Second-order differential encoding is more suitable for encoding monotonically increasing or decreasing sequence data, and is not recommended for sequence data with large fluctuations. 
- -Second-order differential encoding can also be used to encode floating-point numbers, but it is necessary to specify reserved decimal digits (MAX\_POINT\_NUMBER, see [this page](#iotdb-query-statement) for more information on how to specify) when creating time series. It is more suitable for storing sequence data where floating-point values appear continuously, monotonously increase or decrease, and it is not suitable for storing sequence data with high precision requirements after the decimal point or with large fluctuations. - -* RLE - -Run-length encoding is more suitable for storing sequence with continuous integer values, and is not recommended for sequence data with most of the time different values. - -Run-length encoding can also be used to encode floating-point numbers, but it is necessary to specify reserved decimal digits (MAX\_POINT\_NUMBER, see [this page](#iotdb-query-statement) for more information on how to specify) when creating time series. It is more suitable for storing sequence data where floating-point values appear continuously, monotonously increase or decrease, and it is not suitable for storing sequence data with high precision requirements after the decimal point or with large fluctuations. - -* GORILLA - -GORILLA encoding is more suitable for floating-point sequence with similar values and is not recommended for sequence data with large fluctuations. - -* Correspondence between data type and encoding - -The four encodings described in the previous sections are applicable to different data types. If the correspondence is wrong, the time series cannot be created correctly. The correspondence between the data type and its supported encodings is summarized in Table 2-3. - -
**Table 2-3 The correspondence between the data type and its supported encodings** - -|Data Type |Supported Encoding| -|:---:|:---:| -|BOOLEAN| PLAIN, RLE| -|INT32 |PLAIN, RLE, TS_2DIFF| -|INT64 |PLAIN, RLE, TS_2DIFF| -|FLOAT |PLAIN, RLE, TS_2DIFF, GORILLA| -|DOUBLE |PLAIN, RLE, TS_2DIFF, GORILLA| -|TEXT |PLAIN| - -
- -## Compression - -When the time series is written and encoded as binary data according to the specified type, IoTDB compresses the data using compression technology to further improve space storage efficiency. Although both encoding and compression are designed to improve storage efficiency, encoding techniques are usually only available for specific data types (e.g., second-order differential encoding is only suitable for INT32 or INT64 data type, and storing floating-point numbers requires multiplying them by 10m to convert to integers), after which the data is converted to a binary stream. The compression method (SNAPPY) compresses the binary stream, so the use of the compression method is no longer limited by the data type. - -IoTDB allows you to specify the compression method of the column when creating a time series. IoTDB now supports two kinds of compression: UNCOMPRESSED (no compression) and SNAPPY compression. The specified syntax for compression is detailed in [Create Timeseries Statement](#chapter-5-iotdb-sql-documentation). \ No newline at end of file diff --git a/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/2-Data Type.md b/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/2-Data Type.md new file mode 100644 index 0000000000..cbb5ebc580 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/2-Data Type.md @@ -0,0 +1,38 @@ + + +# Chapter 2: Concept + +## Data Type +IoTDB supports six data types in total: BOOLEAN (Boolean), INT32 (Integer), INT64 (Long Integer), FLOAT (Single Precision Floating Point), DOUBLE (Double Precision Floating Point), TEXT (String). 
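The FLOAT and DOUBLE range limits described below come from the return types of Java's `Math.round`: `Math.round(float)` returns an `int` (saturating at `Integer.MAX_VALUE`), while `Math.round(double)` returns a `long`. A minimal sketch (the class name is ours for illustration, not part of IoTDB):

```java
public class RoundRangeDemo {
    // Math.round(float) returns int, so a rounded FLOAT value can never leave the
    // int range -- the reason FLOAT is bounded by (-Integer.MAX_VALUE, Integer.MAX_VALUE).
    public static long roundFloat(float f) {
        return Math.round(f); // int result, widened to long for comparison
    }

    // Math.round(double) returns long, bounding DOUBLE by the long range.
    public static long roundDouble(double d) {
        return Math.round(d);
    }

    public static void main(String[] args) {
        // A float beyond the int range saturates at Integer.MAX_VALUE when rounded.
        System.out.println(roundFloat(1e12f) == Integer.MAX_VALUE); // true
        System.out.println(roundDouble(1e12));                      // 1000000000000
    }
}
```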
+ + +Time series of FLOAT and DOUBLE type can specify MAX\_POINT\_NUMBER (see [this page](#iotdb-query-statement) for more information on how to specify it), which is the number of digits after the decimal point of the floating point number, if the encoding method is [RLE](#encoding) or [TS\_2DIFF](#encoding) (refer to [Create Timeseries Statement](#chapter-5-iotdb-sql-documentation) for more information). If MAX\_POINT\_NUMBER is not specified, the system will use [float\_precision](#encoding) in the configuration file "tsfile-format.properties". + +* For FLOAT values, the data range is (-Integer.MAX_VALUE, Integer.MAX_VALUE) rather than Float.MAX_VALUE, and the max_point_number is at most 19, because of the limitation of the function Math.round(float) in Java. +* For DOUBLE values, the data range is (-Long.MAX_VALUE, Long.MAX_VALUE) rather than Double.MAX_VALUE, and the max_point_number is at most 19, because of the limitation of the function Math.round(double) in Java (Long.MAX_VALUE=9.22E18). + +When the data type of the data input by the user does not match the data type of the time series, the system will report a type error.
As shown below, the second-order difference encoding does not support the Boolean type: + +``` +IoTDB> create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODING=TS_2DIFF +error: encoding TS_2DIFF does not support BOOLEAN +``` diff --git a/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/3-Encoding.md b/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/3-Encoding.md new file mode 100644 index 0000000000..f870053aa2 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/3-Encoding.md @@ -0,0 +1,62 @@ + + +# Chapter 2: Concept + +## Encoding +In order to improve the efficiency of data storage, it is necessary to encode data during data writing, thereby reducing the amount of disk space used. In the process of writing and reading data, the amount of data involved in the I/O operations can be reduced to improve performance. IoTDB supports four encoding methods for different types of data: + +* PLAIN + +PLAIN encoding, the default encoding mode, i.e., no encoding, supports multiple data types. It has high compression and decompression efficiency while suffering from low space storage efficiency. + +* TS_2DIFF + +Second-order differential encoding is more suitable for encoding monotonically increasing or decreasing sequence data, and is not recommended for sequence data with large fluctuations. + +Second-order differential encoding can also be used to encode floating-point numbers, but it is necessary to specify reserved decimal digits (MAX\_POINT\_NUMBER, see [this page](#iotdb-query-statement) for more information on how to specify) when creating time series. It is more suitable for storing sequence data whose floating-point values are continuous and monotonically increasing or decreasing, and it is not suitable for storing sequence data with high precision requirements after the decimal point or with large fluctuations.
+ +* RLE + +Run-length encoding is more suitable for storing sequences with long runs of repeated integer values, and is not recommended for sequence data whose values differ most of the time. + +Run-length encoding can also be used to encode floating-point numbers, but it is necessary to specify reserved decimal digits (MAX\_POINT\_NUMBER, see [this page](#iotdb-query-statement) for more information on how to specify) when creating time series. It is more suitable for storing sequence data whose floating-point values are continuous and monotonically increasing or decreasing, and it is not suitable for storing sequence data with high precision requirements after the decimal point or with large fluctuations. + +* GORILLA + +GORILLA encoding is more suitable for floating-point sequences with similar values and is not recommended for sequence data with large fluctuations. + +* Correspondence between data type and encoding + +The four encodings described in the previous sections are applicable to different data types. If the correspondence is wrong, the time series cannot be created correctly. The correspondence between the data type and its supported encodings is summarized in Table 2-3. + +<center>
**Table 2-3 The correspondence between the data type and its supported encodings** + +|Data Type |Supported Encoding| +|:---:|:---:| +|BOOLEAN| PLAIN, RLE| +|INT32 |PLAIN, RLE, TS_2DIFF| +|INT64 |PLAIN, RLE, TS_2DIFF| +|FLOAT |PLAIN, RLE, TS_2DIFF, GORILLA| +|DOUBLE |PLAIN, RLE, TS_2DIFF, GORILLA| +|TEXT |PLAIN| + +
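To build intuition for why TS_2DIFF suits monotone series while RLE suits repetitive ones, here is an illustrative sketch of both ideas (our simplified helpers, not IoTDB's actual encoder implementation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EncodingSketch {
    // Second-order differences: for a near-linear (monotone) series the stored
    // values shrink toward zero, which is what makes TS_2DIFF effective.
    public static long[] secondDiff(long[] v) {
        long[] out = new long[v.length];
        long prev = 0, prevDelta = 0;
        for (int i = 0; i < v.length; i++) {
            long delta = v[i] - prev;
            out[i] = delta - prevDelta;
            prevDelta = delta;
            prev = v[i];
        }
        return out;
    }

    // Run-length encoding: consecutive repeats collapse into (value, count) pairs.
    public static List<long[]> rle(long[] v) {
        List<long[]> out = new ArrayList<>();
        for (long x : v) {
            if (!out.isEmpty() && out.get(out.size() - 1)[0] == x) {
                out.get(out.size() - 1)[1]++;
            } else {
                out.add(new long[]{x, 1});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Monotone series: second differences are mostly zero.
        System.out.println(Arrays.toString(secondDiff(new long[]{100, 110, 120, 130, 140})));
        // Repetitive series: six values collapse into two runs.
        System.out.println(rle(new long[]{7, 7, 7, 7, 3, 3}).size()); // 2
    }
}
```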
diff --git a/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/4-Compression.md b/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/4-Compression.md new file mode 100644 index 0000000000..bec2a5b1c7 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/2-Concept Key Concepts and Terminology/4-Compression.md @@ -0,0 +1,28 @@ + + +# Chapter 2: Concept + +## Compression + +When the time series is written and encoded as binary data according to the specified type, IoTDB compresses the data using compression technology to further improve space storage efficiency. Although both encoding and compression are designed to improve storage efficiency, encoding techniques are usually only available for specific data types (e.g., second-order differential encoding is only suitable for INT32 or INT64 data types, and storing floating-point numbers requires multiplying them by 10^m to convert them to integers), after which the data is converted to a binary stream. The compression method (SNAPPY) compresses the binary stream, so the use of the compression method is no longer limited by the data type. + +IoTDB allows you to specify the compression method of the column when creating a time series. IoTDB now supports two kinds of compression: UNCOMPRESSED (no compression) and SNAPPY compression. The specified syntax for compression is detailed in [Create Timeseries Statement](#chapter-5-iotdb-sql-documentation). \ No newline at end of file diff --git a/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/1-Sample Data.md b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/1-Sample Data.md new file mode 100644 index 0000000000..75fd5ae95c --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/1-Sample Data.md @@ -0,0 +1,28 @@ + + +# Chapter 3: Operation Manual + +## Sample Data + +To make this manual more practical, we will use a specific scenario example to illustrate how to operate IoTDB databases at all stages of use.
See [this page](Material-SampleData) for details. For your convenience, we also provide a sample data file from a real scenario to import into the IoTDB system for trial and operation. + +Download file: [IoTDB-SampleData.txt](sampledata文件下载链接). diff --git a/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/2-Data Model Selection.md b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/2-Data Model Selection.md new file mode 100644 index 0000000000..70e8c540af --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/2-Data Model Selection.md @@ -0,0 +1,115 @@ + + +# Chapter 3: Operation Manual + +## Data Model Selection + +Before importing data to IoTDB, we first select the appropriate data storage model according to the [sample data](Material-SampleData), and then create the storage groups and timeseries using the [SET STORAGE GROUP](Chapter5,setstoragegroup) and [CREATE TIMESERIES](Chapter5,createtimeseries) statements respectively. + +### Storage Model Selection +According to the data attribute layers described in the [sample data](Material-SampleData), we can express it as an attribute hierarchy structure based on the coverage of attributes and the subordinate relationship between them, as shown in Figure 3.1 below. Its hierarchical relationship is: power group layer - power plant layer - device layer - sensor layer. ROOT is the root node, and each node of the sensor layer is called a leaf node. In the process of using IoTDB, you can directly connect the attributes on the path from the ROOT node to each leaf node with ".", thus forming the name of a timeseries in IoTDB. For example, the left-most path in Figure 3.1 can generate a timeseries named `ROOT.ln.wf01.wt01.status`. + +<center>
+ +**Figure 3.1 Attribute hierarchy structure**
+ +After getting the name of the timeseries, we need to set up the storage group according to the actual scenario and scale of the data. In the scenario of this chapter, data usually arrives in units of groups (i.e., data may span power plants and devices), so in order to avoid frequent IO switching when writing data, and to meet the user's requirement of physically isolating data by group, we set the storage group at the group layer. + +### Storage Group Creation +After selecting the storage model, we can set up the corresponding storage groups accordingly. The SQL statements for creating storage groups are as follows: + +``` +IoTDB > set storage group to root.ln +IoTDB > set storage group to root.sgcc +``` + +We can thus create two storage groups using the above two SQL statements. + +It is worth noting that when the path itself or the parent/child layer of the path is already set as a storage group, the path is then not allowed to be set as a storage group. For example, it is not feasible to set `root.ln.wf01` as a storage group when there exist two storage groups `root.ln` and `root.sgcc`. The system will give the corresponding error prompt as shown below: + +``` +IoTDB> set storage group to root.ln.wf01 +error: The prefix of root.ln.wf01 has been set to the storage group. +``` + +### Show Storage Group +After the storage group is created, we can use the [SHOW STORAGE GROUP](Chapter5,showstoragegroup) statement to view all the storage groups. The SQL statement is as follows: + +``` +IoTDB> show storage group +``` + +The result is as follows: +<div>
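The prefix restriction on storage groups described in this section can be sketched as a simple path check (an illustrative helper of ours, not IoTDB's internal logic):

```java
import java.util.List;

public class StorageGroupRuleSketch {
    // A candidate path conflicts with an existing storage group if either is a
    // dot-separated prefix of the other (e.g. root.ln vs root.ln.wf01).
    public static boolean conflicts(String candidate, List<String> existing) {
        for (String sg : existing) {
            if (candidate.equals(sg)
                    || candidate.startsWith(sg + ".")
                    || sg.startsWith(candidate + ".")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> groups = List.of("root.ln", "root.sgcc");
        System.out.println(conflicts("root.ln.wf01", groups)); // true: child of root.ln
        System.out.println(conflicts("root.turbine", groups)); // false: unrelated path
    }
}
```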
+ +### Timeseries Creation +According to the storage model selected before, we can create the corresponding timeseries in the two storage groups respectively. The SQL statements for creating timeseries are as follows: + +``` +IoTDB > create timeseries root.ln.wf01.wt01.status with datatype=BOOLEAN,encoding=PLAIN +IoTDB > create timeseries root.ln.wf01.wt01.temperature with datatype=FLOAT,encoding=RLE +IoTDB > create timeseries root.ln.wf02.wt02.hardware with datatype=TEXT,encoding=PLAIN +IoTDB > create timeseries root.ln.wf02.wt02.status with datatype=BOOLEAN,encoding=PLAIN +IoTDB > create timeseries root.sgcc.wf03.wt01.status with datatype=BOOLEAN,encoding=PLAIN +IoTDB > create timeseries root.sgcc.wf03.wt01.temperature with datatype=FLOAT,encoding=RLE +``` + +It is worth noting that when the encoding method specified in the CREATE TIMESERIES statement conflicts with the data type, the system will give the corresponding error prompt as shown below: + +``` +IoTDB> create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODING=TS_2DIFF +error: encoding TS_2DIFF does not support BOOLEAN +``` + +Please refer to [Encoding](Chapter2,encoding) for the correspondence between data types and encodings. + +### Show Timeseries + +Currently, IoTDB supports two ways of viewing timeseries: + +* The SHOW TIMESERIES statement presents all timeseries information in JSON form +* The SHOW TIMESERIES <`Path`> statement returns all timeseries information and the total number of timeseries under the given <`Path`> in tabular form. Timeseries information includes: the timeseries path, the storage group it belongs to, the data type, and the encoding type. <`Path`> needs to be a prefix path, a path with a star, or a timeseries path. The SQL statements are as follows: + +``` +IoTDB> show timeseries root +IoTDB> show timeseries root.ln +``` + +The results are shown below respectively: +
+
+ +It is worth noting that when the queried path does not exist, the system will give the corresponding error prompt as shown below: + +``` +IoTDB> show timeseries root.ln.wf03 +Msg: Failed to fetch timeseries root.ln.wf03's metadata because: Timeseries does not exist. +``` + +### Precautions + +Version 0.7.0 imposes some limitations on the scale of data that users can operate: + +Limit 1: Assuming that the JVM memory allocated to IoTDB at runtime is p and the user-defined size of data in memory written to disk ([group\_size\_in\_byte](Chap4group_size_in_byte)) is q, then the number of storage groups should not exceed p/q. + +Limit 2: The number of timeseries should not exceed the ratio of the JVM memory allocated to IoTDB at runtime to 20KB. diff --git a/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/3-Data Import.md b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/3-Data Import.md new file mode 100644 index 0000000000..13a34b6765 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/3-Data Import.md @@ -0,0 +1,87 @@ + + +# Chapter 3: Operation Manual + +## Data Import +### Import Historical Data + +This feature is not supported in version 0.7.0. + +### Import Real-time Data + +IoTDB provides users with a variety of ways to insert real-time data, such as directly inputting an [INSERT SQL statement](chapter5.InsertRecordStatement) in the [Cli/Shell tools](cli-page), or using [Java JDBC](Java-api-page,commingsoon) to perform single or batch execution of [INSERT SQL statements](chapter5.InsertRecordStatement). + +This section mainly introduces the use of the [INSERT SQL statement](chapter5.InsertRecordStatement) for real-time data import in the scenario. See Section 7.1.3.1 for the detailed syntax of the [INSERT SQL statement](chapter5.InsertRecordStatement). + +#### Use of INSERT Statements +The [INSERT SQL statement](chapter5.InsertRecordStatement) can be used to insert data into one or more specified timeseries that have been created.
Each data point inserted consists of a [timestamp](chap2,timestamp) and a sensor acquisition value of the corresponding type (see [Data Type](Chapter2datatype)). + +In the scenario of this section, we take the two timeseries `root.ln.wf02.wt02.status` and `root.ln.wf02.wt02.hardware` as examples; their data types are BOOLEAN and TEXT, respectively. + +The sample code for single-column data insertion is as follows: +``` +IoTDB > insert into root.ln.wf02.wt02(timestamp,status) values(1,true) +IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1, "v1") +``` + +The above example code inserts a long integer timestamp and the value "true" into the timeseries `root.ln.wf02.wt02.status`, and a long integer timestamp and the value "v1" into the timeseries `root.ln.wf02.wt02.hardware`. When the execution is successful, the prompt "execute successfully" will appear to indicate that the data insertion has been completed. + +> Note: In IoTDB, TEXT type data can be represented by single or double quotation marks. The insertion statements above use double quotation marks for TEXT type data. The following example will use single quotation marks for TEXT type data. + +The INSERT statement also supports inserting multi-column data at the same time point. The sample code for inserting the values of the two timeseries at the same time point '2' is as follows: + +``` +IoTDB > insert into root.ln.wf02.wt02(timestamp, status, hardware) VALUES (2, false, 'v2') +``` + +After inserting the data, we can simply query the inserted data using the SELECT statement: + +``` +IoTDB > select * from root.ln.wf02 where time < 3 +``` + +The result is shown below. From the query results, it can be seen that the single-column and multi-column insertion statements were performed correctly. +
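The quoting rules above (TEXT values take single or double quotes, while BOOLEAN and numeric values are written bare) can be sketched with a small client-side statement generator. This helper and its name are invented for illustration and are not part of IoTDB or its JDBC driver:

```python
# Hypothetical client-side helper (not an IoTDB API): render an INSERT
# statement for a device, quoting str values as TEXT literals and writing
# booleans as bare true/false, as the examples in this section do.
def render_insert(device, timestamp, values):
    """values: dict mapping sensor name -> Python value (insertion order kept)."""
    def literal(v):
        if isinstance(v, bool):        # check bool before falling through to str()
            return "true" if v else "false"
        if isinstance(v, str):
            return "'%s'" % v          # TEXT accepts single or double quotes
        return str(v)

    cols = ["timestamp"] + list(values)
    vals = [str(timestamp)] + [literal(v) for v in values.values()]
    return "insert into %s(%s) values(%s)" % (device, ",".join(cols), ",".join(vals))

print(render_insert("root.ln.wf02.wt02", 2, {"status": False, "hardware": "v2"}))
# insert into root.ln.wf02.wt02(timestamp,status,hardware) values(2,false,'v2')
```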
+ +### Error Handling of INSERT Statements +If the user inserts data into a non-existent timeseries, for example, by executing the following command: + +``` +IoTDB > insert into root.ln.wf02.wt02(timestamp, temperature) values(1,"v1") +``` + +Because `root.ln.wf02.wt02.temperature` does not exist, the system will return the following ERROR information: + +``` +error: Timeseries root.ln.wf02.wt02.temperature does not exist. +``` +If the data type inserted by the user is inconsistent with the corresponding data type of the timeseries, for example, by executing the following command: + +``` +IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1,100) +``` +The system will return the following ERROR information: + +``` +error: The TEXT data type should be covered by " or ' +``` diff --git a/docs/Documentation/UserGuideV0.7.0/3-Operation Manual.md b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/4-Data Query.md similarity index 55% rename from docs/Documentation/UserGuideV0.7.0/3-Operation Manual.md rename to docs/Documentation/UserGuideV0.7.0/3-Operation Manual/4-Data Query.md index aafdd88af6..845a287b70 100644 --- a/docs/Documentation/UserGuideV0.7.0/3-Operation Manual.md +++ b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/4-Data Query.md @@ -19,229 +19,8 @@ --> - - -- [Chapter 3: Operation Manual](#chapter-3-operation-manual) - - [Sample Data](#sample-data) - - [Data Model Selection](#data-model-selection) - - [Storage Model Selection](#storage-model-selection) - - [Storage Group Creation](#storage-group-creation) - - [Show Storage Group](#show-storage-group) - - [Timeseries Creation](#timeseries-creation) - - [Show Timeseries](#show-timeseries) - - [Precautions](#precautions) - - [Data Import](#data-import) - - [Import Historical Data](#import-historical-data) - - [Import Real-time Data](#import-real-time-data) - - [Use of INSERT Statements](#use-of-insert-statements) - - [Error Handling of INSERT Statements](#error-handling-of-insert-statements)
- - [Data Query](#data-query) - - [Time Slice Query](#time-slice-query) - - [Select a Column of Data Based on a Time Interval](#select-a-column-of-data-based-on-a-time-interval) - - [Select Multiple Columns of Data Based on a Time Interval](#select-multiple-columns-of-data-based-on-a-time-interval) - - [Select Multiple Columns of Data for the Same Device According to Multiple Time Intervals](#select-multiple-columns-of-data-for-the-same-device-according-to-multiple-time-intervals) - - [Choose Multiple Columns of Data for Different Devices According to Multiple Time Intervals](#choose-multiple-columns-of-data-for-different-devices-according-to-multiple-time-intervals) - - [Down-Frequency Aggregate Query](#down-frequency-aggregate-query) - - [Down-Frequency Aggregate Query without Specifying the Time Axis Origin Position](#down-frequency-aggregate-query-without-specifying-the-time-axis-origin-position) - - [Down-Frequency Aggregate Query Specifying the Time Axis Origin Position](#down-frequency-aggregate-query-specifying-the-time-axis-origin-position) - - [Down-Frequency Aggregate Query Specifying the Time Filtering Conditions](#down-frequency-aggregate-query-specifying-the-time-filtering-conditions) - - [Automated Fill](#automated-fill) - - [Fill Function](#fill-function) - - [Correspondence between Data Type and Fill Method](#correspondence-between-data-type-and-fill-method) - - [Row and Column Control over Query Results](#row-and-column-control-over-query-results) - - [Row Control over Query Results](#row-control-over-query-results) - - [Column Control over Query Results](#column-control-over-query-results) - - [Row and Column Control over Query Results](#row-and-column-control-over-query-results-1) - - [Error Handling](#error-handling) - - [Data Maintenance](#data-maintenance) - - [Data Update](#data-update) - - [Update Single Timeseries](#update-single-timeseries) - - [Data Deletion](#data-deletion) - - [Delete Single Timeseries](#delete-single-timeseries) - - 
[Delete Multiple Timeseries](#delete-multiple-timeseries) - - [Priviledge Management](#priviledge-management) - - [Basic Concepts](#basic-concepts) - - [User](#user) - - [Priviledge](#priviledge) - - [Role](#role) - - [Default User](#default-user) - - [Priviledge Management Operation Examples](#priviledge-management-operation-examples) - - [Create User](#create-user) - - [Grant User Priviledge](#grant-user-priviledge) - - [Other Instructions](#other-instructions) - - [The Relationship among Users, Priviledges and Roles](#the-relationship-among-users-priviledges-and-roles) - - [List of Priviledges Included in the System](#list-of-priviledges-included-in-the-system) - - [Username Restrictions](#username-restrictions) - - [Password Restrictions](#password-restrictions) - - [Role Name Restrictions](#role-name-restrictions) - - # Chapter 3: Operation Manual -## Sample Data - -To make this manual more practical, we will use a specific scenario example to illustrate how to operate IoTDB databases at all stages of use. See [this page](Material-SampleData) for a look. For your convenience, we also provide you with a sample data file in real scenario to import into the IoTDB system for trial and operation. - -Download file: [IoTDB-SampleData.txt](sampledata文件下载链接). - -## Data Model Selection - -Before importing data to IoTDB, we first select the appropriate data storage model according to the [sample data](Material-SampleData), and then create the storage group and timeseries using [SET STORAGE GROUP](Chapter5,setstoragegroup) statement and [CREATE TIMESERIES](Chapter5,createtimeseries) statement respectively. - -### Storage Model Selection -According to the data attribute layers described in [sample data](Material-SampleData), we can express it as an attribute hierarchy structure based on the coverage of attributes and the subordinate relationship between them, as shown in Figure 3.1 below. 
Its hierarchical relationship is: power group layer - power plant layer - device layer - sensor layer. ROOT is the root node, and each node of sensor layer is called a leaf node. In the process of using IoTDB, you can directly connect the attributes on the path from ROOT node to each leaf node with ".", thus forming the name of a timeseries in IoTDB. For example, The left-most path in Figure 3.1 can generate a timeseries named `ROOT.ln.wf01.wt01.status`. - -
- -**Figure 3.1 Attribute hierarchy structure**
- -After getting the name of the timeseries, we need to set up the storage group according to the actual scenario and scale of the data. Because in the scenario of this chapter data is usually arrived in the unit of groups (i.e., data may be across electric fields and devices), in order to avoid frequent switching of IO when writing data, and to meet the user's requirement of physical isolation of data in the unit of groups, we set the storage group at the group layer. - -### Storage Group Creation -After selecting the storage model, according to which we can set up the corresponding storage group. The SQL statements for creating storage groups are as follows: - -``` -IoTDB > set storage group to root.ln -IoTDB > set storage group to root.sgcc -``` - -We can thus create two storage groups using the above two SQL statements. - -It is worth noting that when the path itself or the parent/child layer of the path is already set as a storage group, the path is then not allowed to be set as a storage group. For example, it is not feasible to set `root.ln.wf01` as a storage group when there exist two storage groups `root.ln` and `root.sgcc`. The system will give the corresponding error prompt as shown below: - -``` -IoTDB> set storage group to root.ln.wf01 -error: The prefix of root.ln.wf01 has been set to the storage group. -``` - -### Show Storage Group -After the storage group is created, we can use the [SHOW STORAGE GROUP](Chapter5,showstoragegroup) statement to view all the storage groups. The SQL statement is as follows: - -``` -IoTDB> show storage group -``` - -The result is as follows: -
- -### Timeseries Creation -According to the storage model selected before, we can create corresponding timeseries in the two storage groups respectively. The SQL statements for creating timeseries are as follows: - -``` -IoTDB > create timeseries root.ln.wf01.wt01.status with datatype=BOOLEAN,encoding=PLAIN -IoTDB > create timeseries root.ln.wf01.wt01.temperature with datatype=FLOAT,encoding=RLE -IoTDB > create timeseries root.ln.wf02.wt02.hardware with datatype=TEXT,encoding=PLAIN -IoTDB > create timeseries root.ln.wf02.wt02.status with datatype=BOOLEAN,encoding=PLAIN -IoTDB > create timeseries root.sgcc.wf03.wt01.status with datatype=BOOLEAN,encoding=PLAIN -IoTDB > create timeseries root.sgcc.wf03.wt01.temperature with datatype=FLOAT,encoding=RLE -``` - -It is worth noting that when in the CRATE TIMESERIES statement the encoding method conflicts with the data type, the system will give the corresponding error prompt as shown below: - -``` -IoTDB> create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODING=TS_2DIFF -error: encoding TS_2DIFF does not support BOOLEAN -``` - -Please refer to [Encoding](Chapter2,encoding) for correspondence between data type and encoding. - -### Show Timeseries - -Currently, IoTDB supports two ways of viewing timeseries: - -* SHOW TIMESERIES statement presents all timeseries information in JSON form -* SHOW TIMESERIES <`Path`> statement returns all timeseries information and the total number of timeseries under the given <`Path`> in tabular form. timeseries information includes: timeseries path, storage group it belongs to, data type, encoding type. <`Path`> needs to be a prefix path or a path with star or a timeseries path. SQL statements are as follows: - -``` -IoTDB> show timeseries root -IoTDB> show timeseries root.ln -``` - -The results are shown below respectly: - -
-
- -It is worth noting that when the path queries does not exist, the system will give the corresponding error prompt as shown below: - -``` -IoTDB> show timeseries root.ln.wf03 -Msg: Failed to fetch timeseries root.ln.wf03's metadata because: Timeseries does not exist. -``` - -### Precautions - -Version 0.7.0 imposes some limitations on the scale of data that users can operate: - -Limit 1: Assuming that the JVM memory allocated to IoTDB at runtime is p and the user-defined size of data in memory written to disk ([group\_size\_in\_byte](Chap4group_size_in_byte)) is Q, then the number of storage groups should not exceed p/q. - -Limit 2: The number of timeseries should not exceed the ratio of JVM memory allocated to IoTDB at run time to 20KB. - -## Data Import -### Import Historical Data - -This feature is not supported in version 0.7.0. - -### Import Real-time Data - -IoTDB provides users with a variety of ways to insert real-time data, such as directly inputting [INSERT SQL statement](chapter5.InsertRecordStatement) in [Cli/Shell tools](cli-page), or using [Java JDBC](Java-api-page,commingsoon) to perform single or batch execution of [INSERT SQL statement](chapter5.InsertRecordStatement). - -This section mainly introduces the use of [INSERT SQL statement](chapter5.InsertRecordStatement) for real-time data import in the scenario. See Section 7.1.3.1 for a detailed syntax of [INSERT SQL statement](chapter5.InsertRecordStatement). - -#### Use of INSERT Statements -The [INSERT SQL statement](chapter5.InsertRecordStatement) statement can be used to insert data into one or more specified timeseries that have been created. For each point of data inserted, it consists of a [timestamp](chap2,timestamp) and a sensor acquisition value of a numerical type (see [Data Type](Chapter2datatype)). - -In the scenario of this section, take two timeseries `root.ln.wf02.wt02.status` and `root.ln.wf02.wt02.hardware` as an example, and their data types are BOOLEAN and TEXT, respectively. 
- -The sample code for single column data insertion is as follows: -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp,status) values(1,true) -IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1, "v1") -``` - -The above example code inserts the long integer timestamp and the value "true" into the timeseries `root.ln.wf02.wt02.status` and inserts the long integer timestamp and the value "v1" into the timeseries `root.ln.wf02.wt02.hardware`. When the execution is successful, a prompt "execute successfully" will appear to indicate that the data insertion has been completed. - -> Note: In IoTDB, TEXT type data can be represented by single and double quotation marks. The insertion statement above uses double quotation marks for TEXT type data. The following example will use single quotation marks for TEXT type data. - -The INSERT statement can also support the insertion of multi-column data at the same time point. The sample code of inserting the values of the two timeseries at the same time point '2' is as follows: - -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp, status, hardware) VALUES (2, false, 'v2') -``` - -After inserting the data, we can simply query the inserted data using the SELECT statement: - -``` -IoTDB > select * from root.ln.wf02 where time < 3 -``` - -The result is shown below. From the query results, it can be seen that the insertion statements of single column and multi column data are performed correctly. - -
- -### Error Handling of INSERT Statements -If the user inserts data into a non-existent timeseries, for example, execute the following commands: - -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp, temperature) values(1,"v1") -``` - -Because `root.ln.wf02.wt02. temperature` does not exist, the system will return the following ERROR information: - -``` -error: Timeseries root.ln.wf02.wt02.temperature does not exist. -``` -If the data type inserted by the user is inconsistent with the corresponding data type of the timeseries, for example, execute the following command: - -``` -IoTDB > insert into root.ln.wf02.wt02(timestamp,hardware) values(1,100) -``` -The system will return the following ERROR information: - -``` -error: The TEXT data type should be covered by " or ' -``` - ## Data Query ### Time Slice Query @@ -711,165 +490,3 @@ select * from root.ln.wf01.wt01 where time > 2017-11-01T00:05:00.000 and time < ``` The SQL statement will not be executed and the corresponding error prompt is given as follows:
- -## Data Maintenance -### Data Update - -Users can use [UPDATE statements](Chap5,updatestatement) to update data over a period of time in a specified timeseries. When updating data, users can select a timeseries to be updated (version 0.7.0 does not support multiple timeseries updates) and specify a time point or period to be updated (version 0.7.0 must have time filtering conditions). - -In a JAVA programming environment, you can use the [Java JDBC](Java-api-page,commingsoon) to execute single or batch UPDATE statements. - -#### Update Single Timeseries -Taking the power supply status of ln group wf02 plant wt02 device as an example, there exists such a usage scenario: - -After data access and analysis, it is found that the power supply status from 2017-11-01 15:54:00 to 2017-11-01 16:00:00 is true, but the actual power supply status is abnormal. You need to update the status to false during this period. The SQL statement for this operation is: - -``` -update root.ln.wf02 SET wt02.status = false where time <=2017-11-01T16:00:00 and time >= 2017-11-01T15:54:00 -``` -It should be noted that when the updated data type does not match the actual data type, IoTDB will give the corresponding error prompt as shown below: - -``` -IoTDB> update root.ln.wf02 set wt02.status = 1205 where time < now() -error: The BOOLEAN data type should be true/TRUE or false/FALSE -``` -When the updated path does not exist, IoTDB will give the corresponding error prompt as shown below: - -``` -IoTDB> update root.ln.wf02 set wt02.sta = false where time < now() -error: do not select any existing path -``` -### Data Deletion - -Users can delete data that meet the deletion condition in the specified timeseries by using the [DELETE statement](Chap5,deletestatement). When deleting data, users can select one or more timeseries paths, prefix paths, or paths with star to delete data before a certain time (version 0.7.0 does not support the deletion of data within a closed time interval). 
- -In a JAVA programming environment, you can use the [Java JDBC](Java-api-page,commingsoon) to execute single or batch UPDATE statements. - -#### Delete Single Timeseries -Taking ln Group as an example, there exists such a usage scenario: - -The wf02 plant's wt02 device has many segments of errors in its power supply status before 2017-11-01 16:26:00, and the data cannot be analyzed correctly. The erroneous data affected the correlation analysis with other devices. At this point, the data before this time point needs to be deleted. The SQL statement for this operation is - -``` -delete from root.ln.wf02.wt02.status where time<=2017-11-01T16:26:00; -``` - -#### Delete Multiple Timeseries -When both the power supply status and hardware version of the ln group wf02 plant wt02 device before 2017-11-01 16:26:00 need to be deleted, the prefix path with broader meaning (see Section 3.1.6 of this manual) or the path with star can be used to delete the data. The SQL statement for this operation is: - -``` -delete from root.ln.wf02.wt02 where time <= 2017-11-01T16:26:00; -``` -or - -``` -delete from root.ln.wf02.wt02.* where time <= 2017-11-01T16:26:00; -``` -It should be noted that when the deleted path does not exist, IoTDB will give the corresponding error prompt as shown below: - -``` -IoTDB> delete from root.ln.wf03.wt02.status where time < now() -error: TimeSeries does not exist and cannot be delete data -``` - -## Priviledge Management -IoTDB provides users with priviledge management operations, so as to ensure data security. - -We will show you basic user priviledge management operations through the following specific examples. Detailed SQL syntax and usage details can be found in [Chapter5.SQL Documentation](chap5). At the same time, in the JAVA programming environment, you can use the [Java JDBC](Java-api-page,commingsoon) to execute priviledge management statements in a single or batch mode. 
- -### Basic Concepts -#### User -The user is the legal user of the database. A user corresponds to a unique username and has a password as a means of authentication. Before using a database, a person must first provide a legitimate username and password to make himself/herself a user. - -#### Priviledge -The database provides a variety of operations, and not all users can perform all operations. If a user can perform an operation, the user is said to have the priviledge to perform the operation. Priviledges can be divided into data management priviledge (such as adding, deleting and modifying data) and authority management priviledge (such as creation and deletion of users and roles, granting and revoking of priviledges, etc.). Data management priviledge often needs a path to limit its effective range, which is a subtree rooted at the path's corresponding node. - -#### Role -A role is a set of priviledges and has a unique role name as an identifier. A user usually corresponds to a real identity (such as a traffic dispatcher), while a real identity may correspond to multiple users. These users with the same real identity tend to have the same priviledges. Roles are abstractions that can unify the management of such priviledges. - -#### Default User -There is a default user in IoTDB after the initial installation: root, and the default password is root. This user is an administrator user, who cannot be deleted and has all the priviledges. Neither can new priviledges be granted to the root user nor can priviledges owned by the root user be deleted. - -### Priviledge Management Operation Examples -According to the [sample data](material-sampledata), the sample data of IoTDB may belong to different power generation groups such as ln, sgcc, etc. Different power generation groups do not want others to obtain their own database data, so we need to have data priviledge isolated at the group layer. 
- -#### Create User - -We can create two users for ln and sgcc groups, named ln\_write\_user and sgcc\_write\_user, with both passwords being write\_pwd. The SQL statement is: - -``` -CREATE USER ln_write_user write_pwd -CREATE USER sgcc_write_user write_pwd -``` -Then use the following SQL statement to show the user: - -``` -LIST USER -``` -As can be seen from the result shown below, the two users have been created: - -
- -#### Grant User Priviledge -At this point, although two users have been created, they do not have any priviledges, so they can not operate on the database. For example, we use ln_write_user to write data in the database, the SQL statement is: - -``` -INSERT INTO root.ln.wf01.wt01(timestamp,status) values(1509465600000,true) -``` -The SQL statement will not be executed and the corresponding error prompt is given as follows: - -
- -Now, we grant the two users write priviledges to the corresponding storage groups, and try to write data again. The SQL statement is: - -``` -GRANT USER ln_write_user PRIVILEGES 'INSERT_TIMESERIES' on root.ln -GRANT USER sgcc_write_user PRIVILEGES 'INSERT_TIMESERIES' on root.sgcc -INSERT INTO root.ln.wf01.wt01(timestamp, status) values(1509465600000, true) -``` -The execution result is as follows: -
- -### Other Instructions -#### The Relationship among Users, Priviledges and Roles - -A Role is a set of priviledges, and priviledges and roles are both attributes of users. That is, a role can have several priviledges and a user can have several roles and priviledges (called the user's own priviledges). - -At present, there is no conflicting priviledge in IoTDB, so the real priviledges of a user is the union of the user's own priviledges and the priviledges of the user's roles. That is to say, to determine whether a user can perform an operation, it depends on whether one of the user's own priviledges or the priviledges of the user's roles permits the operation. The user's own priviledges and priviledges of the user's roles may overlap, but it does not matter. - -It should be noted that if users have a priviledge (corresponding to operation A) themselves and their roles contain the same priviledge, then revoking the priviledge from the users themselves alone can not prohibit the users from performing operation A, since it is necessary to revoke the priviledge from the role, or revoke the role from the user. Similarly, revoking the priviledge from the users's roles alone can not prohibit the users from performing operation A. - -At the same time, changes to roles are immediately reflected on all users who own the roles. For example, adding certain priviledges to roles will immediately give all users who own the roles corresponding priviledges, and deleting certain priviledges will also deprive the corresponding users of the priviledges (unless the users themselves have the priviledges). - -#### List of Priviledges Included in the System - -
**Table 3-8 List of Priviledges Included in the System** - -|Priviledge Name|Interpretation| -|:---|:---| -|SET\_STORAGE\_GROUP|create timeseries; set storage groups; path dependent| -|INSERT\_TIMESERIES|insert data; path dependent| -|UPDATE\_TIMESERIES|update data; path dependent| -|READ\_TIMESERIES|query data; path dependent| -|DELETE\_TIMESERIES|delete data or timeseries; path dependent| -|CREATE\_USER|create users; path independent| -|DELETE\_USER|delete users; path independent| -|MODIFY\_PASSWORD|modify passwords for all users; path independent; (Those who do not have this priviledge can still change their own asswords. )| -|LIST\_USER|list all users; list a user's priviledges; list a user's roles with three kinds of operation priviledges; path independent| -|GRANT\_USER\_PRIVILEGE|grant user priviledges; path independent| -|REVOKE\_USER\_PRIVILEGE|revoke user priviledges; path independent| -|GRANT\_USER\_ROLE|grant user roles; path independent| -|REVOKE\_USER\_ROLE|revoke user roles; path independent| -|CREATE\_ROLE|create roles; path independent| -|DELETE\_ROLE|delete roles; path independent| -|LIST\_ROLE|list all roles; list the priviledges of a role; list the three kinds of operation priviledges of all users owning a role; path independent| -|GRANT\_ROLE\_PRIVILEGE|grant role priviledges; path independent| -|REVOKE\_ROLE\_PRIVILEGE|revoke role priviledges; path independent| -
- -#### Username Restrictions -IoTDB specifies that the character length of a username should not be less than 4, and the username cannot contain spaces. -#### Password Restrictions -IoTDB specifies that the character length of a password should not be less than 4, and the password cannot contain spaces. The password is encrypted with MD5. -#### Role Name Restrictions -IoTDB specifies that the character length of a role name should not be less than 4, and the role name cannot contain spaces. diff --git a/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/5-Data Maintenance.md b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/5-Data Maintenance.md new file mode 100644 index 0000000000..16075e64a3 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/5-Data Maintenance.md @@ -0,0 +1,82 @@ + + +# Chapter 3: Operation Manual + +## Data Maintenance +### Data Update + +Users can use [UPDATE statements](Chap5,updatestatement) to update data over a period of time in a specified timeseries. When updating data, users can select a timeseries to be updated (version 0.7.0 does not support multiple timeseries updates) and specify a time point or period to be updated (version 0.7.0 must have time filtering conditions). + +In a JAVA programming environment, you can use the [Java JDBC](Java-api-page,commingsoon) to execute single or batch UPDATE statements. + +#### Update Single Timeseries +Taking the power supply status of ln group wf02 plant wt02 device as an example, there exists such a usage scenario: + +After data access and analysis, it is found that the power supply status from 2017-11-01 15:54:00 to 2017-11-01 16:00:00 is true, but the actual power supply status is abnormal. You need to update the status to false during this period. 
The SQL statement for this operation is: + +``` +update root.ln.wf02 SET wt02.status = false where time <=2017-11-01T16:00:00 and time >= 2017-11-01T15:54:00 +``` +It should be noted that when the updated data type does not match the actual data type, IoTDB will give the corresponding error prompt as shown below: + +``` +IoTDB> update root.ln.wf02 set wt02.status = 1205 where time < now() +error: The BOOLEAN data type should be true/TRUE or false/FALSE +``` +When the updated path does not exist, IoTDB will give the corresponding error prompt as shown below: + +``` +IoTDB> update root.ln.wf02 set wt02.sta = false where time < now() +error: do not select any existing path +``` +### Data Deletion + +Users can delete data that meet the deletion condition in the specified timeseries by using the [DELETE statement](Chap5,deletestatement). When deleting data, users can select one or more timeseries paths, prefix paths, or paths with star to delete data before a certain time (version 0.7.0 does not support the deletion of data within a closed time interval). + +In a JAVA programming environment, you can use the [Java JDBC](Java-api-page,commingsoon) to execute single or batch UPDATE statements. + +#### Delete Single Timeseries +Taking ln Group as an example, there exists such a usage scenario: + +The wf02 plant's wt02 device has many segments of errors in its power supply status before 2017-11-01 16:26:00, and the data cannot be analyzed correctly. The erroneous data affected the correlation analysis with other devices. At this point, the data before this time point needs to be deleted. 
The SQL statement for this operation is: + +``` +delete from root.ln.wf02.wt02.status where time<=2017-11-01T16:26:00; +``` + +#### Delete Multiple Timeseries +When both the power supply status and hardware version of the ln group wf02 plant wt02 device before 2017-11-01 16:26:00 need to be deleted, the prefix path with broader meaning (see Section 3.1.6 of this manual) or the path with star can be used to delete the data. The SQL statement for this operation is: + +``` +delete from root.ln.wf02.wt02 where time <= 2017-11-01T16:26:00; +``` +or + +``` +delete from root.ln.wf02.wt02.* where time <= 2017-11-01T16:26:00; +``` +It should be noted that when the deleted path does not exist, IoTDB will give the corresponding error prompt as shown below: + +``` +IoTDB> delete from root.ln.wf03.wt02.status where time < now() +error: TimeSeries does not exist and cannot be delete data +``` diff --git a/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/6-Priviledge Management.md b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/6-Priviledge Management.md new file mode 100644 index 0000000000..644dacf95e --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/3-Operation Manual/6-Priviledge Management.md @@ -0,0 +1,124 @@ + + +# Chapter 3: Operation Manual + +## Privilege Management +IoTDB provides users with privilege management operations to ensure data security. + +We will show you basic user privilege management operations through the following specific examples. Detailed SQL syntax and usage details can be found in [Chapter5.SQL Documentation](chap5). At the same time, in a Java programming environment, you can use the [Java JDBC](Java-api-page,commingsoon) to execute privilege management statements in single or batch mode. + +### Basic Concepts +#### User +A user is a legal user of the database. Each user corresponds to a unique username and has a password as a means of authentication.
Before using a database, a person must first provide a legitimate username and password to become a user. + +#### Privilege +The database provides a variety of operations, and not all users can perform all operations. If a user can perform an operation, the user is said to have the privilege to perform the operation. Privileges can be divided into data management privileges (such as adding, deleting and modifying data) and authority management privileges (such as creating and deleting users and roles, granting and revoking privileges, etc.). A data management privilege often needs a path to limit its effective range, which is the subtree rooted at the path's corresponding node. + +#### Role +A role is a set of privileges and has a unique role name as an identifier. A user usually corresponds to a real identity (such as a traffic dispatcher), while a real identity may correspond to multiple users. Users with the same real identity tend to have the same privileges, and roles are abstractions that unify the management of such privileges. + +#### Default User +There is a default user in IoTDB after the initial installation: root, with the default password root. This user is an administrator, who cannot be deleted and has all privileges. Neither can new privileges be granted to the root user nor can privileges owned by the root user be revoked. + +### Privilege Management Operation Examples +According to the [sample data](material-sampledata), the sample data of IoTDB may belong to different power generation groups such as ln, sgcc, etc. Different power generation groups do not want others to obtain their own database data, so we need to keep data privileges isolated at the group layer. + +#### Create User + +We can create two users for the ln and sgcc groups, named ln\_write\_user and sgcc\_write\_user, with both passwords being write\_pwd.
The SQL statement is: + +``` +CREATE USER ln_write_user write_pwd +CREATE USER sgcc_write_user write_pwd +``` +Then use the following SQL statement to show the user: + +``` +LIST USER +``` +As can be seen from the result shown below, the two users have been created: + +
+ +#### Grant User Privilege +At this point, although two users have been created, they do not have any privileges, so they cannot operate on the database. For example, if we use ln_write_user to write data in the database, the SQL statement is: + +``` +INSERT INTO root.ln.wf01.wt01(timestamp,status) values(1509465600000,true) +``` +The SQL statement will not be executed and the corresponding error prompt is given as follows: + +
+ +Now, we grant the two users write privileges to the corresponding storage groups, and try to write data again. The SQL statement is: + +``` +GRANT USER ln_write_user PRIVILEGES 'INSERT_TIMESERIES' on root.ln +GRANT USER sgcc_write_user PRIVILEGES 'INSERT_TIMESERIES' on root.sgcc +INSERT INTO root.ln.wf01.wt01(timestamp, status) values(1509465600000, true) +``` +The execution result is as follows: +
+ +### Other Instructions +#### The Relationship among Users, Privileges and Roles + +A role is a set of privileges, and privileges and roles are both attributes of users. That is, a role can have several privileges and a user can have several roles and privileges (called the user's own privileges). + +At present, there are no conflicting privileges in IoTDB, so the real privileges of a user are the union of the user's own privileges and the privileges of the user's roles. That is to say, whether a user can perform an operation depends on whether one of the user's own privileges or the privileges of the user's roles permits the operation. The user's own privileges and the privileges of the user's roles may overlap, but it does not matter. + +It should be noted that if users have a privilege (corresponding to operation A) themselves and their roles contain the same privilege, then revoking the privilege from the users themselves alone cannot prohibit the users from performing operation A; it is also necessary to revoke the privilege from the role, or revoke the role from the user. Similarly, revoking the privilege from the users' roles alone cannot prohibit the users from performing operation A. + +At the same time, changes to roles are immediately reflected on all users who own the roles. For example, adding certain privileges to a role will immediately give all users who own that role the corresponding privileges, and deleting certain privileges will also deprive the corresponding users of the privileges (unless the users themselves have the privileges). + +#### List of Privileges Included in the System +
**Table 3-8 List of Privileges Included in the System** + +|Privilege Name|Interpretation| +|:---|:---| +|SET\_STORAGE\_GROUP|create timeseries; set storage groups; path dependent| +|INSERT\_TIMESERIES|insert data; path dependent| +|UPDATE\_TIMESERIES|update data; path dependent| +|READ\_TIMESERIES|query data; path dependent| +|DELETE\_TIMESERIES|delete data or timeseries; path dependent| +|CREATE\_USER|create users; path independent| +|DELETE\_USER|delete users; path independent| +|MODIFY\_PASSWORD|modify passwords for all users; path independent; (those who do not have this privilege can still change their own passwords)| +|LIST\_USER|list all users; list a user's privileges; list a user's roles with three kinds of operation privileges; path independent| +|GRANT\_USER\_PRIVILEGE|grant user privileges; path independent| +|REVOKE\_USER\_PRIVILEGE|revoke user privileges; path independent| +|GRANT\_USER\_ROLE|grant user roles; path independent| +|REVOKE\_USER\_ROLE|revoke user roles; path independent| +|CREATE\_ROLE|create roles; path independent| +|DELETE\_ROLE|delete roles; path independent| +|LIST\_ROLE|list all roles; list the privileges of a role; list the three kinds of operation privileges of all users owning a role; path independent| +|GRANT\_ROLE\_PRIVILEGE|grant role privileges; path independent| +|REVOKE\_ROLE\_PRIVILEGE|revoke role privileges; path independent|
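The union semantics and the path-dependent scope of data management privileges described above can be sketched in a few lines. This is a hypothetical illustration in Python, not IoTDB's implementation; the `User`, `Role`, and `path_covers` names are invented for the example:

```python
# Hypothetical sketch of the privilege model described above (not IoTDB code).
# A user's real privileges are the union of the user's own privileges and the
# privileges of all of the user's roles; a path-dependent privilege is
# effective on the subtree rooted at the granted path.

def path_covers(granted_path, target_path):
    """A privilege granted on root.ln covers root.ln itself and its subtree."""
    return target_path == granted_path or target_path.startswith(granted_path + ".")

class Role:
    def __init__(self, name):
        self.name = name
        self.privileges = set()      # {(privilege_name, granted_path)}

class User:
    def __init__(self, name):
        self.name = name
        self.privileges = set()      # the user's own privileges
        self.roles = []

    def real_privileges(self):
        # Union of the user's own privileges and all role privileges.
        result = set(self.privileges)
        for role in self.roles:
            result |= role.privileges
        return result

    def can(self, privilege_name, path):
        return any(name == privilege_name and path_covers(granted, path)
                   for name, granted in self.real_privileges())

user = User("ln_write_user")
user.privileges.add(("INSERT_TIMESERIES", "root.ln"))
print(user.can("INSERT_TIMESERIES", "root.ln.wf01.wt01"))   # True
print(user.can("INSERT_TIMESERIES", "root.sgcc.wf03.wt01")) # False
```

The sketch also mirrors the revoking caveat above: removing a privilege from `user.privileges` does not change the result of `can` while one of the user's roles still grants the same privilege.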
+ +#### Username Restrictions +IoTDB specifies that the character length of a username should not be less than 4, and the username cannot contain spaces. +#### Password Restrictions +IoTDB specifies that the character length of a password should not be less than 4, and the password cannot contain spaces. The password is encrypted with MD5. +#### Role Name Restrictions +IoTDB specifies that the character length of a role name should not be less than 4, and the role name cannot contain spaces. diff --git a/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management.md b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management.md deleted file mode 100644 index ab046922f9..0000000000 --- a/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management.md +++ /dev/null @@ -1,1019 +0,0 @@ - - - - -- [Chapter4: Deployment and Management](#chapter4-deployment-and-management) - - [Deployment](#deployment) - - [Prerequisites](#prerequisites) - - [Installation from binary files](#installation-from--binary-files) - - [Installation from source code](#installation-from-source-code) - - [Installation from Docker (dockerfile)](#installation-by-docker-dockerfile) - - [Configuration](#configuration) - - [IoTDB Environment Configuration File](#iotdb-environment-configuration-file) - - [IoTDB System Configuration File](#iotdb-system-configuration-file) - - [File Layer](#file-layer) - - [Engine Layer](#engine-layer) - - [System Monitor](#system-monitor) - - [System Status Monitoring](#system-status-monitoring) - - [JMX MBean Monitoring](#jmx-mbean-monitoring) - - [MBean Monitor Attributes List](#mbean-monitor-attributes-list) - - [Data Status Monitoring](#data-status-monitoring) - - [Writing Data Monitor](#writing-data-monitor) - - [Example](#example) - - [File Size Monitor](#file-size-monitor) - - [System log](#system-log) - - [Dynamic System Log Configuration](#dynamic-system-log-configuration) - - [Connect JMX](#connect-jmx) - - [Interface 
Instruction](#interface-instruction) - - [Data Management](#data-management) - - [Data Files](#data-files) - - [System Files](#system-files) - - [Pre-write Log Files](#pre-write-log-files) - - [Example of Setting Data storage Directory](#example-of-setting-data-storage-directory) - - -# Chapter4: Deployment and Management - -## Deployment - -IoTDB provides two installation methods; you can refer to the following suggestions and choose one of them: - -* Installation from binary files. Download the binary files from the official website. This is the recommended method, in which you will get an out-of-the-box binary release package. -* Installation from source code. If you need to modify the code yourself, you can use this method. - -### Prerequisites - -To install and use IoTDB, you need to have: - -1. Java >= 1.8 (Please make sure the environment path has been set) -2. Maven >= 3.0 (If you want to compile and install IoTDB from source code) -3. TsFile >= 0.7.0 (TsFile Github page: [https://github.com/thulab/tsfile](https://github.com/thulab/tsfile)) -4. IoTDB-JDBC >= 0.7.0 (IoTDB-JDBC Github page: [https://github.com/thulab/iotdb-jdbc](https://github.com/thulab/iotdb-jdbc)) - -TODO: TsFile and IoTDB-JDBC dependencies will be removed after the project reconstruction. - -### Installation from binary files - -IoTDB provides binary files which contain all the necessary components for the IoTDB system to run. You can get them on our website [http://tsfile.org/download](http://tsfile.org/download). - -``` -NOTE: -iotdb-.tar.gz # For Linux or MacOS -iotdb-.zip # For Windows -``` - -After downloading, you can extract the IoTDB package using the following operations: - -``` -Shell > unzip iotdb-.zip # For Windows -Shell > tar -zxf iotdb-.tar.gz # For Linux or MacOS -``` - -The IoTDB project will be at the subfolder named iotdb.
The folder will include the following contents: - -``` -iotdb/ <-- root path -| -+- bin/ <-- script files -| -+- conf/ <-- configuration files -| -+- lib/ <-- project dependencies -| -+- LICENSE <-- LICENSE -``` - -### Installation from source code - -Use git to get the IoTDB source code: - -``` -Shell > git clone https://github.com/apache/incubator-iotdb.git -``` - -Or: - -``` -Shell > git clone git@github.com:apache/incubator-iotdb.git -``` - -Now suppose your directory is like this: - -``` -> pwd -/workspace/incubator-iotdb - -> ls -l -incubator-iotdb/ <-- root path -| -+- iotdb/ -| -+- jdbc/ -| -+- iotdb-cli/ -| -... -| -+- pom.xml -``` - -Let $IOTDB_HOME = /workspace/incubator-iotdb/iotdb/iotdb/ -Let $IOTDB_CLI_HOME = /workspace/incubator-iotdb/iotdb-cli/cli/ - -Note: -* if `IOTDB_HOME` is not explicitly assigned, -then by default `IOTDB_HOME` is the direct parent directory of `bin/start-server.sh` on Unix/OS X -(or that of `bin\start-server.bat` on Windows). - -* if `IOTDB_CLI_HOME` is not explicitly assigned, -then by default `IOTDB_CLI_HOME` is the direct parent directory of `bin/start-client.sh` on -Unix/OS X (or that of `bin\start-client.bat` on Windows). - -If this is not the first time you build IoTDB, remember to delete the following files: - -``` -> rm -rf $IOTDB_HOME/data/ -> rm -rf $IOTDB_HOME/lib/ -``` - -Then under the root path of incubator-iotdb, you can build IoTDB using Maven: - -``` -> pwd -/workspace/incubator-iotdb - -> mvn clean package -pl iotdb -am -Dmaven.test.skip=true -``` - -If successful, you will see the following text in the terminal:
SUCCESS [ 3.717 s] -[INFO] IoTDB Jdbc ......................................... SUCCESS [ 3.076 s] -[INFO] IoTDB .............................................. SUCCESS [ 8.258 s] -[INFO] ------------------------------------------------------------------------ -[INFO] BUILD SUCCESS -[INFO] ------------------------------------------------------------------------ -``` - -Otherwise, you may need to check the error statements and fix the problems. - -After building, the IoTDB project will be at the subfolder named iotdb. The folder will include the following contents: - -``` -$IOTDB_HOME/ -| -+- bin/ <-- script files -| -+- conf/ <-- configuration files -| -+- lib/ <-- project dependencies -``` - - - -### Installation by Docker (Dockerfile) - -You can build and run an IoTDB docker image by following the guide of [Deployment by Docker](#build-and-use-iotdb-by-dockerfile) - - -## Configuration - - -Before starting to use IoTDB, you need to configure the configuration files first. For your convenience, we have already set the default configuration in the files. - -In total, we provide three kinds of configuration modules: - -* environment configuration file (iotdb-env.bat, iotdb-env.sh). The default configuration file for the environment configuration items. Users can configure the relevant JVM system configuration items in the file. -* system configuration file (tsfile-format.properties, iotdb-engine.properties). - * tsfile-format.properties: The default configuration file for the IoTDB file layer configuration items. Users can configure information about the TsFile, such as the data size written to the disk at a time (group\_size\_in_byte). - * iotdb-engine.properties: The default configuration file for the IoTDB engine layer configuration items. Users can configure the IoTDB engine related parameters in the file, such as the JDBC service listening port (rpc\_port), the unsequence data storage directory (unsequence\_data\_dir), etc.
-* log configuration file (logback.xml) - -The configuration files of the three configuration modules are located in the IoTDB installation directory: the $IOTDB_HOME/conf folder. - -### IoTDB Environment Configuration File - -The environment configuration file is mainly used to configure the Java environment related parameters when IoTDB Server is running, such as JVM related configuration. This part of the configuration is passed to the JVM when the IoTDB Server starts. Users can view the contents of the environment configuration file by viewing the iotdb-env.sh (or iotdb-env.bat) file. - -The details of each variable are as follows: - -* JMX\_LOCAL - -|Name|JMX\_LOCAL| -|:---:|:---| -|Description|JMX monitoring mode, configured as yes to allow only local monitoring, no to allow remote monitoring| -|Type|Enum String: "yes", "no"| -|Default|yes| -|Effective|After restart system| - - -* JMX\_PORT - -|Name|JMX\_PORT| -|:---:|:---| -|Description|JMX listening port. Please confirm that the port is not a system reserved port and is not occupied| -|Type|Short Int: [0,65535]| -|Default|31999| -|Effective|After restart system| - -* MAX\_HEAP\_SIZE - -|Name|MAX\_HEAP\_SIZE| -|:---:|:---| -|Description|The maximum heap memory size that IoTDB can use at startup.| -|Type|String| -|Default| On Linux or MacOS, the default is one quarter of the memory. On Windows, the default value for 32-bit systems is 512M, and the default for 64-bit systems is 2G.| -|Effective|After restart system| - -* HEAP\_NEWSIZE - -|Name|HEAP\_NEWSIZE| -|:---:|:---| -|Description|The minimum heap memory size that IoTDB can use at startup.| -|Type|String| -|Default| On Linux or MacOS, the default is min{cores * 100M, one quarter of MAX\_HEAP\_SIZE}.
On Windows, the default value for 32-bit systems is 512M, and the default for 64-bit systems is 2G.| -|Effective|After restart system| - -### IoTDB System Configuration File - -#### File Layer - -* compressor - -|Name|compressor| -|:---:|:---| -|Description|Data compression method| -|Type|Enum String : “UNCOMPRESSED”, “SNAPPY”| -|Default| UNCOMPRESSED | -|Effective|Immediately| - -* group\_size\_in\_byte - -|Name|group\_size\_in\_byte| -|:---:|:---| -|Description|The data size written to the disk per time| -|Type|Int32| -|Default| 134217728 | -|Effective|Immediately| - -* page\_size\_in\_byte - -|Name| page\_size\_in\_byte | -|:---:|:---| -|Description|The maximum size of a single page written in memory when each column in memory is written (in bytes)| -|Type|Int32| -|Default| 134217728 | -|Effective|Immediately| - -* max\_number\_of\_points\_in\_page - -|Name| max\_number\_of\_points\_in\_page | -|:---:|:---| -|Description|The maximum number of data points (timestamps - valued groups) contained in a page| -|Type|Int32| -|Default| 1048576 | -|Effective|Immediately| - -* max\_string\_length - -|Name| max\_string\_length | -|:---:|:---| -|Description|The maximum length of a single string (number of character)| -|Type|Int32| -|Default| 128 | -|Effective|Immediately| - -* time\_series\_data\_type - -|Name| time\_series\_data\_type | -|:---:|:---| -|Description|Timestamp data type| -|Type|Enum String: "INT32", "INT64"| -|Default| Int64 | -|Effective|Immediately| - -* time\_encoder - -|Name| time\_encoder | -|:---:|:---| -|Description| Encoding type of time column| -|Type|Enum String: “TS_2DIFF”,“PLAIN”,“RLE”| -|Default| TS_2DIFF | -|Effective|Immediately| - -* value\_encoder - -|Name| value\_encoder | -|:---:|:---| -|Description| Encoding type of value column| -|Type|Enum String: “TS_2DIFF”,“PLAIN”,“RLE”| -|Default| PLAIN | -|Effective|Immediately| - -* float_precision - -|Name| float_precision | -|:---:|:---| -|Description| The precision of the floating point 
number.(The number of digits after the decimal point) | -|Type|Int32| -|Default| The default is 2 digits. Note: The 32-bit floating point number has a decimal precision of 7 bits, and the 64-bit floating point number has a decimal precision of 15 bits. If the setting is out of the range, it will have no practical significance. | -|Effective|Immediately| - -#### Engine Layer - -* rpc_address - -|Name| rpc_address | -|:---:|:---| -|Description| The jdbc service listens on the address.| -|Type|String| -|Default| "0.0.0.0" | -|Effective|After restart system| - -* rpc_port - -|Name| rpc_port | -|:---:|:---| -|Description| The jdbc service listens on the port. Please confirm that the port is not a system reserved port and is not occupied.| -|Type|Short Int : [0,65535]| -|Default| 6667 | -|Effective|After restart system| - -* time_zone - -|Name| time_zone | -|:---:|:---| -|Description| The time zone in which the server is located, the default is Beijing time (+8) | -|Type|Time Zone String| -|Default| +08:00 | -|Effective|After restart system| - -* base\_dir - -|Name| base\_dir | -|:---:|:---| -|Description| The IoTDB system folder. It is recommended to use an absolute path. | -|Type|String| -|Default| data | -|Effective|After restart system| - -* data_dirs - -|Name| data_dirs | -|:---:|:---| -|Description| The directories of data files. Multiple directories are separated by comma. See the [mult\_dir\_strategy](chapter4,multdirstrategy) configuration item for data distribution strategy. The starting directory of the relative path is related to the operating system. It is recommended to use an absolute path. If the path does not exist, the system will automatically create it.| -|Type|String[]| -|Default| data/data | -|Effective|After restart system| - -* wal\_dir - -|Name| wal\_dir | -|:---:|:---| -|Description| Write Ahead Log storage path. It is recommended to use an absolute path. 
| -|Type|String| -|Default| data/wal | -|Effective|After restart system| - -* enable_wal - -|Name| enable_wal | -|:---:|:---| -|Description| Whether to enable the pre-write log. The default value is true(enabled), and false means closed. | -|Type|Bool| -|Default| true | -|Effective|After restart system| - -* mult\_dir\_strategy - -|Name| mult\_dir\_strategy | -|:---:|:---| -|Description| IoTDB's strategy for selecting directories for TsFile in tsfile_dir. You can use a simple class name or a full name of the class. The system provides the following three strategies:
1. SequenceStrategy: IoTDB selects the directory from tsfile\_dir in order, traverses all the directories in tsfile\_dir in turn, and keeps counting;
2. MaxDiskUsableSpaceFirstStrategy: IoTDB first selects the directory with the largest free disk space in tsfile\_dir;
3. MinFolderOccupiedSpaceFirstStrategy: IoTDB prefers the directory with the least space used in tsfile\_dir;
4. (user-defined policy)
You can complete a user-defined policy in the following ways:
1. Inherit the cn.edu.tsinghua.iotdb.conf.directories.strategy.DirectoryStrategy class and implement its own Strategy method;
2. Fill in the configuration class with the full class name of the implemented class (package name plus class name, UserDfineStrategyPackage);
3. Add the jar file to the project. | -|Type|String| -|Default| MaxDiskUsableSpaceFirstStrategy | -|Effective|After restart system| - -* tsfile\_size\_threshold - -|Name| tsfile\_size\_threshold | -|:---:|:---| -|Description| When the size of a TsFile on disk exceeds this threshold, the TsFile is closed and a new TsFile is opened to accept data writes. The unit is byte and the default value is 512MB.| -|Type| Int64 | -|Default| 536870912 | -|Effective|After restart system| - -* flush\_wal\_threshold - -|Name| flush\_wal\_threshold | -|:---:|:---| -|Description| After the WAL reaches this value, it is flushed to disk, and it is possible to lose at most flush_wal_threshold operations. | -|Type|Int32| -|Default| 10000 | -|Effective|After restart system| - -* flush\_wal\_period\_in\_ms - -|Name| flush\_wal\_period\_in\_ms | -|:---:|:---| -|Description| The period at which the log is periodically forced to flush to disk (in milliseconds) | -|Type|Int32| -|Default| 10 | -|Effective|After restart system| - -* fetch_size - -|Name| fetch_size | -|:---:|:---| -|Description| The amount of data read in each batch (the number of data rows, that is, the number of different timestamps.) | -|Type|Int32| -|Default| 10000 | -|Effective|After restart system| - -* merge\_concurrent\_threads - -|Name| merge\_concurrent\_threads | -|:---:|:---| -|Description| The max number of threads that can be used when unsequence data is merged. The larger the value, the more I/O and CPU are consumed; the smaller the value, the more disk space is occupied when there is a lot of unsequence data, and reading becomes slower.
| -|Type|Int32| -|Default| 10 | -|Effective|After restart system| - -* enable\_stat\_monitor - -|Name| enable\_stat\_monitor | -|:---:|:---| -|Description| Whether to enable background statistics| -|Type| Boolean | -|Default| true | -|Effective|After restart system| - -* back\_loop\_period_in_second - -|Name| back\_loop\_period\_in\_second | -|:---:|:---| -|Description| The frequency at which the system statistics module triggers (in seconds). | -|Type|Int32| -|Default| 5 | -|Effective|After restart system| - -* concurrent\_flush\_thread - -|Name| concurrent\_flush\_thread | -|:---:|:---| -|Description| The number of threads used when IoTDB writes data from memory to disk. If the value is less than or equal to 0, the number of CPU cores installed on the machine is used. The default is 0.| -|Type| Int32 | -|Default| 0 | -|Effective|After restart system| - -* stat\_monitor\_detect\_freq\_sec - -|Name| stat\_monitor\_detect\_freq\_sec | -|:---:|:---| -|Description| The interval (in seconds) at which the system checks whether the current statistics' time range exceeds stat_monitor_retain_interval_sec and performs regular cleaning| -|Type| Int32 | -|Default|600 | -|Effective|After restart system| - -* stat\_monitor\_retain\_interval\_sec - -|Name| stat\_monitor\_retain\_interval\_sec | -|:---:|:---| -|Description| The retention time of system statistics data (in seconds). Statistics data beyond the retention time range will be cleaned regularly.| -|Type| Int32 | -|Default|600 | -|Effective|After restart system| - -## System Monitor - -Currently, IoTDB allows users to monitor system status with Java's JConsole tool or check data status through IoTDB's open API. - -### System Status Monitoring - -After starting the JConsole tool and connecting to the IoTDB server, you will have a basic look at the IoTDB system status (CPU occupation, in-memory information, etc.).
See the [official documentation](https://docs.oracle.com/javase/7/docs/technotes/guides/management/jconsole.html) for more information. - -#### JMX MBean Monitoring -By using the JConsole tool and connecting with JMX you can see some system statistics and parameters. -This section describes how to use the JConsole ```Mbean``` tab to monitor the number of files opened by the IoTDB service process, the size of the data file, and so on. Once connected to JMX, you can find the ```MBean``` named ```org.apache.iotdb.service``` through the ```MBeans``` tab, as shown in the following Figure. - - - -There are several attributes under Monitor, including the numbers of files opened in different folders, the data file size statistics and the values of some system parameters. Double-clicking the value corresponding to an attribute also displays a line chart of that attribute. In particular, all the opened file count statistics are currently only supported on ```MacOS``` and most ```Linux``` distros except ```CentOS```. For unsupported operating systems these statistics will return ```-2```. See the following section for a specific introduction to the Monitor attributes. - -##### MBean Monitor Attributes List - -* DataSizeInByte - -|Name| DataSizeInByte | -|:---:|:---| -|Description| The total size of data files.| -|Unit| Byte | -|Type| Long | - -* FileNodeNum - -|Name| FileNodeNum | -|:---:|:---| -|Description| The count number of FileNodes. (Currently not supported)| -|Type| Long | - -* OverflowCacheSize - -|Name| OverflowCacheSize | -|:---:|:---| -|Description| The size of the out-of-order data cache. (Currently not supported)| -|Unit| Byte | -|Type| Long | - -* BufferWriteCacheSize - -|Name| BufferWriteCacheSize | -|:---:|:---| -|Description| The size of the BufferWriter cache. (Currently not supported)| -|Unit| Byte | -|Type| Long | - -* BaseDirectory - -|Name| BaseDirectory | -|:---:|:---| -|Description| The absolute directory of data files.
| -|Type| String | - -* WriteAheadLogStatus - -|Name| WriteAheadLogStatus | -|:---:|:---| -|Description| The status of write-ahead-log (WAL). ```True``` means WAL is enabled. | -|Type| Boolean | - -* TotalOpenFileNum - -|Name| TotalOpenFileNum | -|:---:|:---| -|Description| All the opened file number of IoTDB server process. | -|Type| Int | - -* DeltaOpenFileNum - -|Name| DeltaOpenFileNum | -|:---:|:---| -|Description| The opened TsFile file number of IoTDB server process. | -|Default Directory| /data/data/settled | -|Type| Int | - -* WalOpenFileNum - -|Name| WalOpenFileNum | -|:---:|:---| -|Description| The opened write-ahead-log file number of IoTDB server process. | -|Default Directory| /data/wal | -|Type| Int | - -* MetadataOpenFileNum - -|Name| MetadataOpenFileNum | -|:---:|:---| -|Description| The opened meta-data file number of IoTDB server process. | -|Default Directory| /data/system/schema | -|Type| Int | - -* DigestOpenFileNum - -|Name| DigestOpenFileNum | -|:---:|:---| -|Description| The opened info file number of IoTDB server process. | -|Default Directory| /data/system/info | -|Type| Int | - -* SocketOpenFileNum - -|Name| SocketOpenFileNum | -|:---:|:---| -|Description| The Socket link (TCP or UDP) number of the operation system. | -|Type| Int | - -* MergePeriodInSecond - -|Name| MergePeriodInSecond | -|:---:|:---| -|Description| The interval at which the IoTDB service process periodically triggers the merge process. | -|Unit| Second | -|Type| Long | - -* ClosePeriodInSecond - -|Name| ClosePeriodInSecond | -|:---:|:---| -|Description| The interval at which the IoTDB service process periodically flushes memory data to disk. | -|Unit| Second | -|Type| Long | - -### Data Status Monitoring - -This module is the statistical monitoring method provided by IoTDB for users to store data information. We will record the statistical data in the system and store it in the database. The current 0.7.0 version of IoTDB provides statistics for writing data. 
- -The user can choose to enable or disable the data statistics monitoring function (set the `enable_stat_monitor` item in the configuration file, see [Engine Layer](chapter4,enginelayer) for details). - -#### Writing Data Monitor - -The current statistics of writing data by the system can be divided into two major modules: **Global Writing Data Statistics** and **Storage Group Writing Data Statistics**. **Global Writing Data Statistics** records the point number written by the user and the number of requests. **Storage Group Writing Data Statistics** records data of a certain storage group. - -By default, the system collects data every 5 seconds, writes the statistics into IoTDB and stores them in a system-specified location. (If you need to change the statistics frequency, you can set the `back_loop_period_in_second` entry in the configuration file, see Section [Engine Layer](chapter4,enginelayer) for details). After the system is refreshed or restarted, IoTDB does not recover the statistics, and the statistics data will restart from zero. - -In order to avoid the excessive use of statistical information, we add a mechanism to periodically clear invalid statistics data. The system will delete invalid data at regular intervals. The user can set the trigger frequency of cleaning (`stat_monitor_detect_freq_sec`, default is 600s, see section [Engine Layer](chapter4,enginelayer) for details) and the valid data duration (`stat_monitor_retain_interval_sec`, default is 600s, see section [Engine Layer](chapter4,enginelayer) for details); that is, only statistics written within stat_monitor_retain_interval_sec before a cleaning operation are considered valid. In order to ensure the stability of the system, it is not allowed to delete the statistics frequently.
Therefore, if the configured time is less than the default value (600s), the system will ignore the configured value and use the default. - -You can use a `select` clause to query the writing data statistics just like any other time series. - -Here are the writing data statistics: - -* TOTAL_POINTS (GLOBAL) - -|Name| TOTAL\_POINTS | -|:---:|:---| -|Description| Calculate the global writing points number.| -|Type| Writing data statistics | -|Timeseries Name| root.stats.write.global.TOTAL\_POINTS | -|Reset After Restarting System| yes | -|Example| select TOTAL_POINTS from root.stats.write.global| - -* TOTAL\_REQ\_SUCCESS (GLOBAL) - -|Name| TOTAL\_REQ\_SUCCESS | -|:---:|:---| -|Description| Calculate the global successful requests number.| -|Type| Writing data statistics | -|Timeseries Name| root.stats.write.global.TOTAL\_REQ\_SUCCESS | -|Reset After Restarting System| yes | -|Example| select TOTAL\_REQ\_SUCCESS from root.stats.write.global| - -* TOTAL\_REQ\_FAIL (GLOBAL) - -|Name| TOTAL\_REQ\_FAIL | -|:---:|:---| -|Description| Calculate the global failed requests number.| -|Type| Writing data statistics | -|Timeseries Name| root.stats.write.global.TOTAL\_REQ\_FAIL | -|Reset After Restarting System| yes | -|Example| select TOTAL\_REQ\_FAIL from root.stats.write.global| - - -* TOTAL\_POINTS\_FAIL (GLOBAL) - -|Name| TOTAL\_POINTS\_FAIL | -|:---:|:---| -|Description| Calculate the global failed writing points number.| -|Type| Writing data statistics | -|Timeseries Name| root.stats.write.global.TOTAL\_POINTS\_FAIL | -|Reset After Restarting System| yes | -|Example| select TOTAL\_POINTS\_FAIL from root.stats.write.global| - - -* TOTAL\_POINTS\_SUCCESS (GLOBAL) - -|Name| TOTAL\_POINTS\_SUCCESS | -|:---:|:---| -|Description| Calculate the global successful writing points number.| -|Type| Writing data statistics | -|Timeseries Name| root.stats.write.global.TOTAL\_POINTS\_SUCCESS | -|Reset After Restarting System| yes | -|Example| select TOTAL\_POINTS\_SUCCESS from
root.stats.write.global| - -* TOTAL\_REQ\_SUCCESS (STORAGE GROUP) - -|Name| TOTAL\_REQ\_SUCCESS | -|:---:|:---| -|Description| Calculate the successful requests number for specific storage group| -|Type| Writing data statistics | -|Timeseries Name| root.stats.write.\<storage\_group\_name\>.TOTAL\_REQ\_SUCCESS | -|Reset After Restarting System| yes | -|Example| select TOTAL\_REQ\_SUCCESS from root.stats.write.\<storage\_group\_name\>| - -* TOTAL\_REQ\_FAIL (STORAGE GROUP) - -|Name| TOTAL\_REQ\_FAIL | -|:---:|:---| -|Description| Calculate the failed requests number for specific storage group| -|Type| Writing data statistics | -|Timeseries Name| root.stats.write.\<storage\_group\_name\>.TOTAL\_REQ\_FAIL | -|Reset After Restarting System| yes | -|Example| select TOTAL\_REQ\_FAIL from root.stats.write.\<storage\_group\_name\>| - - -* TOTAL\_POINTS\_SUCCESS (STORAGE GROUP) - -|Name| TOTAL\_POINTS\_SUCCESS | -|:---:|:---| -|Description| Calculate the successful writing points number for specific storage group.| -|Type| Writing data statistics | -|Timeseries Name| root.stats.write.\<storage\_group\_name\>.TOTAL\_POINTS\_SUCCESS | -|Reset After Restarting System| yes | -|Example| select TOTAL\_POINTS\_SUCCESS from root.stats.write.\<storage\_group\_name\>| - - -* TOTAL\_POINTS\_FAIL (STORAGE GROUP) - -|Name| TOTAL\_POINTS\_FAIL | -|:---:|:---| -|Description| Calculate the failed writing points number for specific storage group.| -|Type| Writing data statistics | -|Timeseries Name| root.stats.write.\<storage\_group\_name\>.TOTAL\_POINTS\_FAIL | -|Reset After Restarting System| yes | -|Example| select TOTAL\_POINTS\_FAIL from root.stats.write.\<storage\_group\_name\>| - -> Note: -> -> \<storage\_group\_name\> should be replaced by the real storage group name, and the '.' in the storage group name needs to be replaced by '_'. For example, if the storage group name is 'root.a.b', when used in the statistics it becomes 'root\_a\_b' - -##### Example - -Here are some examples of using the writing data statistics. - -If you want to know the global successful writing points number, you can use a `select` clause to query its value.
The query statement is like this: - -``` -select TOTAL_POINTS_SUCCESS from root.stats.write.global -``` - -If you want to know the successful writing points number of root.ln (storage group), here is the query statement: - -``` -select TOTAL_POINTS_SUCCESS from root.stats.write.root_ln -``` - -If you want to know the latest value of this statistic in the system, you can use the `MAX_VALUE` aggregation function to query it. Here is the query statement: - -``` -select MAX_VALUE(TOTAL_POINTS_SUCCESS) from root.stats.write.root_ln -``` - -#### File Size Monitor - -Sometimes we are concerned about how the data file size of IoTDB is changing, perhaps to help calculate how much disk space is left or the data ingestion speed. The File Size Monitor provides several statistics to show how different types of file sizes change. - -By default, the file size monitor collects file size data every 5 seconds, using the same shared parameter ```back_loop_period_sec```. - -Unlike the Writing Data Monitor, currently the File Size Monitor will not delete statistic data at regular intervals. - -You can also use a `select` clause to get the file size statistics like other time series. - -Here are the file size statistics: - -* DATA - -|Name| DATA | -|:---:|:---| -|Description| Calculate the sum of all the files' sizes under the data directory (```data/data``` by default) in byte.| -|Type| File size statistics | -|Timeseries Name| root.stats.file\_size.DATA | -|Reset After Restarting System| No | -|Example| select DATA from root.stats.file\_size.DATA| - -* SETTLED - -|Name| SETTLED | -|:---:|:---| -|Description| Calculate the sum of all the ```TsFile``` size (under ```data/data/settled``` by default) in byte.
If there are multiple ```TsFile``` directories like ```{data/data/settled1, data/data/settled2}```, this statistic is the sum of their size.| -|Type| File size statistics | -|Timeseries Name| root.stats.file\_size.SETTLED | -|Reset After Restarting System| No | -|Example| select SETTLED from root.stats.file\_size.SETTLED| - -* OVERFLOW - -|Name| OVERFLOW | -|:---:|:---| -|Description| Calculate the sum of all the ```out-of-order data file``` size (under ```data/data/unsequence``` by default) in byte.| -|Type| File size statistics | -|Timeseries Name| root.stats.file\_size.OVERFLOW | -|Reset After Restarting System| No | -|Example| select OVERFLOW from root.stats.file\_size.OVERFLOW| - - -* WAL - -|Name| WAL | -|:---:|:---| -|Description| Calculate the sum of all the ```Write-Ahead-Log file``` size (under ```data/wal``` by default) in byte.| -|Type| File size statistics | -|Timeseries Name| root.stats.file\_size.WAL | -|Reset After Restarting System| No | -|Example| select WAL from root.stats.file\_size.WAL| - - -* INFO - -|Name| INFO| -|:---:|:---| -|Description| Calculate the sum of all the ```.restore```, etc. file size (under ```data/system/info```) in byte.| -|Type| File size statistics | -|Timeseries Name| root.stats.file\_size.INFO | -|Reset After Restarting System| No | -|Example| select INFO from root.stats.file\_size.INFO| - -* SCHEMA - -|Name| SCHEMA | -|:---:|:---| -|Description| Calculate the sum of all the ```metadata file``` size (under ```data/system/metadata```) in byte.| -|Type| File size statistics | -|Timeseries Name| root.stats.file\_size.SCHEMA | -|Reset After Restarting System| No | -|Example| select SCHEMA from root.stats.file\_size.SCHEMA| - - - -## System log - -IoTDB allows users to configure IoTDB system logs (such as log output level) by modifying the log configuration file. The default location of the system log configuration file is in \$IOTDB_HOME/conf folder. - -The default log configuration file is named logback.xml. 
The user can modify the configuration of the system running log by adding or changing the xml tree node parameters. It should be noted that the configuration of the system log using the log configuration file does not take effect immediately after the modification; instead, it takes effect after restarting the system. The usage of logback.xml is just as usual. - -At the same time, in order to facilitate the debugging of the system by developers and DBAs, we provide several JMX interfaces to dynamically modify the log configuration and configure the Log module of the system in real time without restarting the system. For detailed usage, see the [Dynamic System Log Configuration](Chap4dynamicsystemlog) section. - -### Dynamic System Log Configuration - -#### Connect JMX - -Here we use JConsole to connect with JMX. - -Start the JConsole and establish a new JMX connection with the IoTDB Server (you can select the local process or input the IP and PORT for remote connection; the default operation port of the IoTDB JMX service is 31999). Fig 4.1 shows the connection GUI of JConsole. - - - -After connecting, click `MBean` and find `ch.qos.logback.classic.default.ch.qos.logback.classic.jmx.JMXConfigurator` (as shown in fig 4.2). - - -In the JMXConfigurator Window, there are 6 operations provided for you, as shown in fig 4.3. You can use these interfaces to perform the operations. - - - -#### Interface Instruction - -* reloadDefaultConfiguration - -This method is to reload the default logback configuration file. The user can modify the default configuration file first, and then call this method to reload the modified configuration file into the system to take effect. - -* reloadByFileName - -This method loads a logback configuration file with the specified path and name, and then makes it take effect. This method accepts a parameter of type String named p1, which is the path to the configuration file that needs to be specified for loading.
- -* getLoggerEffectiveLevel - -This method is to obtain the current log level of the specified Logger. This method accepts a String type parameter named p1, which is the name of the specified Logger. This method returns the log level currently in effect for the specified Logger. - -* getLoggerLevel - -This method is to obtain the log level of the specified Logger. This method accepts a String type parameter named p1, which is the name of the specified Logger. This method returns the log level of the specified Logger. -It should be noted that the difference between this method and the `getLoggerEffectiveLevel` method is that this method returns the log level set for the specified Logger in the configuration file. If the user has not set a log level for the Logger, it returns null. According to Logback's log-level inheritance mechanism, if a Logger's log level is not explicitly set, it will inherit the log level settings from its nearest ancestor. In that case, calling the `getLoggerEffectiveLevel` method will return the log level in effect for the Logger, while calling the method described in this section will return null. - -## Data Management - -In IoTDB, there are many kinds of data that need to be stored. In this section, we will introduce IoTDB's data storage strategy in order to give you an intuitive understanding of IoTDB's data management. - -The data that IoTDB stores is divided into three categories, namely data files, system files, and pre-write log files. - -### Data Files - -Data files store all the data that the user writes to IoTDB, including TsFile and other files. The TsFile storage directory can be configured with the `tsfile_dir` configuration item (see [file layer](Chap4filelayer) for details). Other files can be configured through the [data_dir](chap4datadir) configuration item (see [Engine Layer](chapter4,enginelayer) for details).
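The selection among multiple configured TsFile directories can be illustrated with a small standalone sketch. This is not IoTDB's actual implementation (the real strategy classes query disk space through the file system); it only shows the idea behind `MaxDiskUsableSpaceFirstStrategy`: among the candidate directories, pick the one with the most usable space.

```java
public class MaxSpaceFirstSketch {
    // Illustrative only: given candidate directories and their currently
    // usable space in bytes, return the directory with the most space.
    static String choose(String[] dirs, long[] usableBytes) {
        int best = 0;
        for (int i = 1; i < dirs.length; i++) {
            if (usableBytes[i] > usableBytes[best]) {
                best = i;
            }
        }
        return dirs[best];
    }

    public static void main(String[] args) {
        String[] dirs = {"/data1", "/data2", "/data3"};
        long[] usable = {100L, 300L, 200L};
        // Prints: /data2 (the directory with the largest usable space)
        System.out.println(choose(dirs, usable));
    }
}
```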
- -In order to better support users' storage requirements such as disk space expansion, IoTDB supports a multiple-directory storage method for TsFile storage configuration. Users can set multiple storage paths as data storage locations (see the [tsfile_dir](Chap4,tsfiledir) configuration item), and you can specify or customize the directory selection policy (see the [mult_dir_strategy](chapter4,enginelayer) configuration item for details). - -### System Files - -System files include restore files and schema files, which store metadata information of the data in IoTDB. They can be configured through the `sys_dir` configuration item (see [System Layer](chapter4,systemlayer) for details). - -### Pre-write Log Files - -Pre-write log files store WAL files. They can be configured through the `wal_dir` configuration item (see [System Layer](chapter4,systemlayer) for details). - -### Example of Setting Data storage Directory - -For a clearer understanding of configuring the data storage directory, we will give an example in this section. - -All data directory paths involved in storage directory setting are: data_dir, tsfile_dir, mult_dir_strategy, sys_dir, and wal_dir, which refer to data files, storage strategy, system files, and pre-write log files. You can choose to configure the items you'd like to change; otherwise, you can use the system default configuration items without any operation. - -Here we give an example of a user who configures all five configurations mentioned above. The configuration items are as follows: - -``` -data_dir = D:\\iotdb\\data\\data -tsfile_dir = E:\\iotdb\\data\\data1, data\\data2, F:\\data3
mult_dir_strategy = MaxDiskUsableSpaceFirstStrategy
sys_dir = data\\system
wal_dir = data - -``` -After setting the configuration, the system will: - -* Save all data files except TsFile in D:\\iotdb\\data\\data -* Save TsFile in E:\\iotdb\\data\\data1, $IOTDB_HOME\\data\\data2 and F:\\data3. The choosing strategy is `MaxDiskUsableSpaceFirstStrategy`, that is, every time data is written to disk, the system automatically selects the directory with the largest remaining disk space to write to. -* Save system data in $IOTDB_HOME\\data\\system -* Save WAL data in $IOTDB_HOME\\data - -> Note: -> -> If you change directory names in tsfile_dir, the new names and the old names should be in one-to-one correspondence. Also, the files in the old directories need to be moved to the new directories. -> -> If you add some directories to tsfile_dir, IoTDB will add the paths automatically. You don't need to do anything yourself. - -For example, modify the tsfile_dir to: - -``` -tsfile_dir = D:\\data4, E:\\data5, F:\\data6 -``` - -You need to move files in E:\iotdb\data\data1 to D:\data4, move files in %IOTDB_HOME%\data\data2 to E:\data5, and move files in F:\data3 to F:\data6. In this way, the system will operate normally. - - - - -## Build and use IoTDB by Dockerfile -Now a Dockerfile has been written at ROOT/docker/Dockerfile on the branch enable_docker_image. - -1. You can build a docker image by: -``` -$ docker build -t iotdb:base git://github.com/apache/incubator-iotdb#master:docker -``` -Or: -``` -$ git clone https://github.com/apache/incubator-iotdb -$ cd incubator-iotdb -$ cd docker -$ docker build -t iotdb:base . -``` -Once the docker image has been built locally (the tag is iotdb:base in this example), you are almost done! - -2. Create docker volumes for data files and logs: -``` -$ docker volume create mydata -$ docker volume create mylogs -``` -3.
Run a docker container: -```shell -$ docker run -p 6667:6667 -v mydata:/iotdb/data -v mylogs:/iotdb/logs -d iotdb:base /iotdb/bin/start-server.sh -``` -If successful, you can run `docker ps` and get something like the following: -``` -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -2a68b6944cb5 iotdb:base "/iotdb/bin/start-se…" 4 minutes ago Up 5 minutes 0.0.0.0:6667->6667/tcp laughing_meitner -``` -You can use the following command to get the container ID: -``` -$ docker container ls -``` -suppose the ID is \<container\_id\>. - -And get the docker IP by: -``` -$ docker inspect --format='{{.NetworkSettings.IPAddress}}' <container_id> -``` -suppose the IP is \<container\_ip\>. - -4. If you just want to have a try by using iotdb-cli, you can: -``` -$ docker exec -it <container_id> /bin/bash -$ (now you have entered the container): /cli/bin/start-client.sh -h localhost -p 6667 -u root -pw root -``` - -Or, run a new docker container as the client: -``` -$ docker run -it iotdb:base /cli/bin/start-client.sh -h <container_ip> -p 6667 -u root -pw root -``` -Or, if you have an iotdb-cli locally (e.g., you have compiled the source code by `mvn package`), and suppose your work_dir is cli/bin, then you can just run: -``` -$ start-client.sh -h localhost -p 6667 -u root -pw root -``` -5. If you want to write code to insert data and query data, please add the following dependency: -```xml - -<dependency> -    <groupId>org.apache.iotdb</groupId> -    <artifactId>iotdb-jdbc</artifactId> -    <version>0.8.0-SNAPSHOT</version> -</dependency> - -``` -Some examples of how to use IoTDB with IoTDB-JDBC can be found at: https://github.com/apache/incubator-iotdb/tree/master/jdbc/src/test/java/org/apache/iotdb/jdbc/demo - -(Notice that because we have not yet published Apache IoTDB version 0.8.0, you have to compile the source code by `mvn install -DskipTests` to install the dependency into your local maven repository) - -6. Now enjoy it!
diff --git a/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/1-Deployment.md b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/1-Deployment.md new file mode 100644 index 0000000000..d12125f18b --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/1-Deployment.md @@ -0,0 +1,169 @@ + + +# Chapter 4: Deployment and Management + +## Deployment + +IoTDB provides two installation methods. You can refer to the following suggestions and choose one of them: + +* Installation from binary files. Download the binary files from the official website. This is the recommended method, in which you will get a binary released package which is out-of-the-box. +* Installation from source code. If you need to modify the code yourself, you can use this method. + +### Prerequisites + +To install and use IoTDB, you need to have: + +1. Java >= 1.8 (Please make sure the environment path has been set) +2. Maven >= 3.0 (If you want to compile and install IoTDB from source code) +3. TsFile >= 0.7.0 (TsFile Github page: [https://github.com/thulab/tsfile](https://github.com/thulab/tsfile)) +4. IoTDB-JDBC >= 0.7.0 (IoTDB-JDBC Github page: [https://github.com/thulab/iotdb-jdbc](https://github.com/thulab/iotdb-jdbc)) + +TODO: TsFile and IoTDB-JDBC dependencies will be removed after the project restructuring. + +### Installation from binary files + +IoTDB provides binary files which contain all the necessary components for the IoTDB system to run. You can get them on our website [http://tsfile.org/download](http://tsfile.org/download). + +``` +NOTE: +iotdb-<version>.tar.gz # For Linux or MacOS +iotdb-<version>.zip # For Windows +``` + +After downloading, you can extract the IoTDB tarball using the following operations: + +``` +Shell > unzip iotdb-<version>.zip # For Windows +Shell > tar -zxf iotdb-<version>.tar.gz # For Linux or MacOS +``` + +The IoTDB project will be at the subfolder named iotdb.
The folder will include the following contents: + +``` +iotdb/ <-- root path +| ++- bin/ <-- script files +| ++- conf/ <-- configuration files +| ++- lib/ <-- project dependencies +| ++- LICENSE <-- LICENSE +``` + +### Installation from source code + +Use git to get IoTDB source code: + +``` +Shell > git clone https://github.com/apache/incubator-iotdb.git +``` + +Or: + +``` +Shell > git clone git@github.com:apache/incubator-iotdb.git +``` + +Now suppose your directory is like this: + +``` +> pwd +/workspace/incubator-iotdb + +> ls -l +incubator-iotdb/ <-- root path +| ++- iotdb/ +| ++- jdbc/ +| ++- iotdb-cli/ +| +... +| ++- pom.xml +``` + +Let $IOTDB_HOME = /workspace/incubator-iotdb/iotdb/iotdb/ +Let $IOTDB_CLI_HOME = /workspace/incubator-iotdb/iotdb-cli/cli/ + +Note: +* if `IOTDB_HOME` is not explicitly assigned, +then by default `IOTDB_HOME` is the direct parent directory of `bin/start-server.sh` on Unix/OS X +(or that of `bin\start-server.bat` on Windows). + +* if `IOTDB_CLI_HOME` is not explicitly assigned, +then by default `IOTDB_CLI_HOME` is the direct parent directory of `bin/start-client.sh` on +Unix/OS X (or that of `bin\start-client.bat` on Windows). + +If this is not your first time building IoTDB, remember to delete the following files: + +``` +> rm -rf $IOTDB_HOME/data/ +> rm -rf $IOTDB_HOME/lib/ +``` + +Then under the root path of incubator-iotdb, you can build IoTDB using Maven: + +``` +> pwd +/workspace/incubator-iotdb + +> mvn clean package -pl iotdb -am -Dmaven.test.skip=true +``` + +If successful, you will see the following text in the terminal: + +``` +[INFO] ------------------------------------------------------------------------ +[INFO] Reactor Summary: +[INFO] +[INFO] IoTDB Root ......................................... SUCCESS [ 7.020 s] +[INFO] TsFile ............................................. SUCCESS [ 10.486 s] +[INFO] Service-rpc ........................................
SUCCESS [ 3.717 s] +[INFO] IoTDB Jdbc ......................................... SUCCESS [ 3.076 s] +[INFO] IoTDB .............................................. SUCCESS [ 8.258 s] +[INFO] ------------------------------------------------------------------------ +[INFO] BUILD SUCCESS +[INFO] ------------------------------------------------------------------------ +``` + +Otherwise, you may need to check the error statements and fix the problems. + +After building, the IoTDB project will be at the subfolder named iotdb. The folder will include the following contents: + +``` +$IOTDB_HOME/ +| ++- bin/ <-- script files +| ++- conf/ <-- configuration files +| ++- lib/ <-- project dependencies +``` + + + +### Installation by Docker (Dockerfile) + +You can build and run an IoTDB docker image by following the guide of [Deployment by Docker](#build-and-use-iotdb-by-dockerfile) diff --git a/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/2-Configuration.md b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/2-Configuration.md new file mode 100644 index 0000000000..e512d5439e --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/2-Configuration.md @@ -0,0 +1,329 @@ + + +# Chapter 4: Deployment and Management + +## Configuration + + +Before starting to use IoTDB, you need to configure the configuration files first. For your convenience, we have already set the default config in the files. + +In total, we provide three kinds of configuration modules: + +* environment configuration file (iotdb-env.bat, iotdb-env.sh). The default configuration file for the environment configuration item. Users can configure the relevant system configuration items of JAVA-JVM in the file. +* system configuration file (tsfile-format.properties, iotdb-engine.properties). + * tsfile-format.properties: The default configuration file for the IoTDB file layer configuration item.
Users can configure information about the TsFile, such as the data size written to the disk each time (group\_size\_in_byte). +* iotdb-engine.properties: The default configuration file for the IoTDB engine layer configuration item. Users can configure the IoTDB engine related parameters in the file, such as JDBC service listening port (rpc\_port), unsequence data storage directory (unsequence\_data\_dir), etc. +* log configuration file (logback.xml) + +The configuration files of the three configuration items are located in the IoTDB installation directory: $IOTDB_HOME/conf folder. + +### IoTDB Environment Configuration File + +The environment configuration file is mainly used to configure the Java environment related parameters when IoTDB Server is running, such as JVM related configuration. This part of the configuration is passed to the JVM when the IoTDB Server starts. Users can view the contents of the environment configuration file by viewing the iotdb-env.sh (or iotdb-env.bat) file. + +The details of each variable are as follows: + +* JMX\_LOCAL + +|Name|JMX\_LOCAL| +|:---:|:---| +|Description|JMX monitoring mode, configured as yes to allow only local monitoring, no to allow remote monitoring| +|Type|Enum String: "yes", "no"| +|Default|yes| +|Effective|After restart system| + + +* JMX\_PORT + +|Name|JMX\_PORT| +|:---:|:---| +|Description|JMX listening port. Please confirm that the port is not a system reserved port and is not occupied| +|Type|Short Int: [0,65535]| +|Default|31999| +|Effective|After restart system| + +* MAX\_HEAP\_SIZE + +|Name|MAX\_HEAP\_SIZE| +|:---:|:---| +|Description|The maximum heap memory size that IoTDB can use at startup.| +|Type|String| +|Default| On Linux or MacOS, the default is one quarter of the memory.
On Windows, the default value for 32-bit systems is 512M, and the default for 64-bit systems is 2G.| +|Effective|After restart system| + +* HEAP\_NEWSIZE + +|Name|HEAP\_NEWSIZE| +|:---:|:---| +|Description|The minimum heap memory size that IoTDB can use at startup.| +|Type|String| +|Default| On Linux or MacOS, the default is min{cores * 100M, one quarter of MAX\_HEAP\_SIZE}. On Windows, the default value for 32-bit systems is 512M, and the default for 64-bit systems is 2G.| +|Effective|After restart system| + +### IoTDB System Configuration File + +#### File Layer + +* compressor + +|Name|compressor| +|:---:|:---| +|Description|Data compression method| +|Type|Enum String : “UNCOMPRESSED”, “SNAPPY”| +|Default| UNCOMPRESSED | +|Effective|Immediately| + +* group\_size\_in\_byte + +|Name|group\_size\_in\_byte| +|:---:|:---| +|Description|The data size written to the disk per time| +|Type|Int32| +|Default| 134217728 | +|Effective|Immediately| + +* page\_size\_in\_byte + +|Name| page\_size\_in\_byte | +|:---:|:---| +|Description|The maximum size of a single page written in memory when each column in memory is written (in bytes)| +|Type|Int32| +|Default| 134217728 | +|Effective|Immediately| + +* max\_number\_of\_points\_in\_page + +|Name| max\_number\_of\_points\_in\_page | +|:---:|:---| +|Description|The maximum number of data points (timestamps - valued groups) contained in a page| +|Type|Int32| +|Default| 1048576 | +|Effective|Immediately| + +* max\_string\_length + +|Name| max\_string\_length | +|:---:|:---| +|Description|The maximum length of a single string (number of character)| +|Type|Int32| +|Default| 128 | +|Effective|Immediately| + +* time\_series\_data\_type + +|Name| time\_series\_data\_type | +|:---:|:---| +|Description|Timestamp data type| +|Type|Enum String: "INT32", "INT64"| +|Default| Int64 | +|Effective|Immediately| + +* time\_encoder + +|Name| time\_encoder | +|:---:|:---| +|Description| Encoding type of time column| +|Type|Enum String: 
“TS_2DIFF”,“PLAIN”,“RLE”| +|Default| TS_2DIFF | +|Effective|Immediately| + +* value\_encoder + +|Name| value\_encoder | +|:---:|:---| +|Description| Encoding type of value column| +|Type|Enum String: “TS_2DIFF”,“PLAIN”,“RLE”| +|Default| PLAIN | +|Effective|Immediately| + +* float_precision + +|Name| float_precision | +|:---:|:---| +|Description| The precision of the floating point number.(The number of digits after the decimal point) | +|Type|Int32| +|Default| The default is 2 digits. Note: The 32-bit floating point number has a decimal precision of 7 bits, and the 64-bit floating point number has a decimal precision of 15 bits. If the setting is out of the range, it will have no practical significance. | +|Effective|Immediately| + +#### Engine Layer + +* rpc\_address + +|Name| rpc\_address | +|:---:|:---| +|Description| The jdbc service listens on the address.| +|Type|String| +|Default| "0.0.0.0" | +|Effective|After restart system| + +* rpc\_port + +|Name| rpc\_port | +|:---:|:---| +|Description| The jdbc service listens on the port. Please confirm that the port is not a system reserved port and is not occupied.| +|Type|Short Int : [0,65535]| +|Default| 6667 | +|Effective|After restart system| + +* time\_zone + +|Name| time\_zone | +|:---:|:---| +|Description| The time zone in which the server is located, the default is Beijing time (+8) | +|Type|Time Zone String| +|Default| +08:00 | +|Effective|After restart system| + +* base\_dir + +|Name| base\_dir | +|:---:|:---| +|Description| The IoTDB system folder. It is recommended to use an absolute path. | +|Type|String| +|Default| data | +|Effective|After restart system| + +* data\_dirs + +|Name| data\_dirs | +|:---:|:---| +|Description| The directories of data files. Multiple directories are separated by comma. See the [mult\_dir\_strategy](chapter4,multdirstrategy) configuration item for data distribution strategy. The starting directory of the relative path is related to the operating system. 
It is recommended to use an absolute path. If the path does not exist, the system will automatically create it.| +|Type|String[]| +|Default| data/data | +|Effective|After restart system| + +* wal\_dir + +|Name| wal\_dir | +|:---:|:---| +|Description| Write Ahead Log storage path. It is recommended to use an absolute path. | +|Type|String| +|Default| data/wal | +|Effective|After restart system| + +* enable\_wal + +|Name| enable\_wal | +|:---:|:---| +|Description| Whether to enable the pre-write log. The default value is true(enabled), and false means closed. | +|Type|Bool| +|Default| true | +|Effective|After restart system| + +* mult\_dir\_strategy + +|Name| mult\_dir\_strategy | +|:---:|:---| +|Description| IoTDB's strategy for selecting directories for TsFile in tsfile_dir. You can use a simple class name or a full name of the class. The system provides the following three strategies:
1. SequenceStrategy: IoTDB selects the directory from tsfile\_dir in order, traverses all the directories in tsfile\_dir in turn, and keeps counting;
2. MaxDiskUsableSpaceFirstStrategy: IoTDB first selects the directory with the largest free disk space in tsfile\_dir;
3. MinFolderOccupiedSpaceFirstStrategy: IoTDB prefers the directory with the least space used in tsfile\_dir;
4. (user-defined policy)
You can complete a user-defined policy in the following ways:
1. Inherit the cn.edu.tsinghua.iotdb.conf.directories.strategy.DirectoryStrategy class and implement its own Strategy method;
2. Fill in the configuration item with the full class name of the implemented class (package name plus class name, e.g., UserDefineStrategyPackage);
3. Add the jar file to the project. | +|Type|String| +|Default| MaxDiskUsableSpaceFirstStrategy | +|Effective|After restart system| + +* tsfile\_size\_threshold + +|Name| tsfile\_size\_threshold | +|:---:|:---| +|Description| When a TsFile size on the disk exceeds this threshold, the TsFile is closed and a new TsFile is opened to accept data writes. The unit is byte and the default value is 512MB.| +|Type| Int64 | +|Default| 536870912 | +|Effective|After restart system| + +* flush\_wal\_threshold + +|Name| flush\_wal\_threshold | +|:---:|:---| +|Description| After the WAL reaches this value, it is flushed to disk, and it is possible to lose at most flush_wal_threshold operations. | +|Type|Int32| +|Default| 10000 | +|Effective|After restart system| + +* flush\_wal\_period\_in\_ms + +|Name| flush\_wal\_period\_in\_ms | +|:---:|:---| +|Description| The period at which the log is periodically forced to flush to disk (in milliseconds) | +|Type|Int32| +|Default| 10 | +|Effective|After restart system| + +* fetch\_size + +|Name| fetch\_size | +|:---:|:---| +|Description| The amount of data read each time in batch (the number of data strips, that is, the number of different timestamps.) | +|Type|Int32| +|Default| 10000 | +|Effective|After restart system| + +* merge\_concurrent\_threads + +|Name| merge\_concurrent\_threads | +|:---:|:---| +|Description| The max number of threads that can be used when unsequence data is merged. The larger it is, the more I/O and CPU it costs; the smaller it is, the more disk space is occupied when there is much unsequence data, and reading becomes slower.
| -|Type|Int32| -|Default| 10 | -|Effective|After restart system| - -* enable\_stat\_monitor - -|Name| enable\_stat\_monitor | -|:---:|:---| -|Description| Whether to enable background statistics| -|Type| Boolean | -|Default| true | -|Effective|After restart system| - -* back\_loop\_period_in_second - -|Name| back\_loop\_period\_in\_second | -|:---:|:---| -|Description| The frequency at which the system statistic module triggers (in seconds). | -|Type|Int32| -|Default| 5 | -|Effective|After restart system| - -* concurrent\_flush\_thread - -|Name| concurrent\_flush\_thread | -|:---:|:---| -|Description| The number of threads used when IoTDB writes in-memory data to disk. If the value is less than or equal to 0, then the number of CPU cores installed on the machine is used. The default is 0.| -|Type| Int32 | -|Default| 0 | -|Effective|After restart system| - -* stat\_monitor\_detect\_freq\_sec - -|Name| stat\_monitor\_detect\_freq\_sec | -|:---:|:---| -|Description| The time interval (in seconds) at which the system checks whether the currently recorded statistics exceed stat_monitor_retain_interval and performs regular cleaning| -|Type| Int32 | -|Default|600 | -|Effective|After restart system| - -* stat\_monitor\_retain\_interval\_sec - -|Name| stat\_monitor\_retain\_interval\_sec | -|:---:|:---| -|Description| The retention time of system statistics data (in seconds).
Statistics data over the retention time range will be cleaned regularly.| +|Type| Int32 | +|Default|600 | +|Effective|After restart system| diff --git a/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/3-System Monitor.md b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/3-System Monitor.md new file mode 100644 index 0000000000..6d7c76e40e --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/3-System Monitor.md @@ -0,0 +1,359 @@ + + +# Chapter 4: Deployment and Management + +## System Monitor + +Currently, IoTDB provides Java's JConsole tool for users to monitor system status, as well as IoTDB's open API to check data status. + +### System Status Monitoring + +After starting the JConsole tool and connecting to the IoTDB server, you get a basic overview of IoTDB system status (CPU occupation, in-memory information, etc.). See the [official documentation](https://docs.oracle.com/javase/7/docs/technotes/guides/management/jconsole.html) for more information. + +#### JMX MBean Monitoring +By using the JConsole tool and connecting with JMX, you can see some system statistics and parameters. +This section describes how to use the JConsole ```MBean``` tab to monitor the number of files opened by the IoTDB service process, the size of the data file, and so on. Once connected to JMX, you can find the ```MBean``` named ```org.apache.iotdb.service``` through the ```MBeans``` tab, as shown in the following Figure. + + + +There are several attributes under Monitor, including the numbers of files opened in different folders, data file size statistics, and the values of some system parameters. Double-clicking the value corresponding to an attribute also displays a line chart of that attribute. In particular, the opened file count statistics are currently only supported on ```MacOS``` and most ```Linux``` distributions except ```CentOS```. On unsupported operating systems, these statistics return ```-2```.
See the following section for specific introduction of the Monitor attributes. + +##### MBean Monitor Attributes List + +* DataSizeInByte + +|Name| DataSizeInByte | +|:---:|:---| +|Description| The total size of data file.| +|Unit| Byte | +|Type| Long | + +* FileNodeNum + +|Name| FileNodeNum | +|:---:|:---| +|Description| The count number of FileNode. (Currently not supported)| +|Type| Long | + +* OverflowCacheSize + +|Name| OverflowCacheSize | +|:---:|:---| +|Description| The size of out-of-order data cache. (Currently not supported)| +|Unit| Byte | +|Type| Long | + +* BufferWriteCacheSize + +|Name| BufferWriteCacheSize | +|:---:|:---| +|Description| The size of BufferWriter cache. (Currently not supported)| +|Unit| Byte | +|Type| Long | + +* BaseDirectory + +|Name| BaseDirectory | +|:---:|:---| +|Description| The absolute directory of data file. | +|Type| String | + +* WriteAheadLogStatus + +|Name| WriteAheadLogStatus | +|:---:|:---| +|Description| The status of write-ahead-log (WAL). ```True``` means WAL is enabled. | +|Type| Boolean | + +* TotalOpenFileNum + +|Name| TotalOpenFileNum | +|:---:|:---| +|Description| All the opened file number of IoTDB server process. | +|Type| Int | + +* DeltaOpenFileNum + +|Name| DeltaOpenFileNum | +|:---:|:---| +|Description| The opened TsFile file number of IoTDB server process. | +|Default Directory| /data/data/settled | +|Type| Int | + +* WalOpenFileNum + +|Name| WalOpenFileNum | +|:---:|:---| +|Description| The opened write-ahead-log file number of IoTDB server process. | +|Default Directory| /data/wal | +|Type| Int | + +* MetadataOpenFileNum + +|Name| MetadataOpenFileNum | +|:---:|:---| +|Description| The opened meta-data file number of IoTDB server process. | +|Default Directory| /data/system/schema | +|Type| Int | + +* DigestOpenFileNum + +|Name| DigestOpenFileNum | +|:---:|:---| +|Description| The opened info file number of IoTDB server process. 
| +|Default Directory| /data/system/info | +|Type| Int | + +* SocketOpenFileNum + +|Name| SocketOpenFileNum | +|:---:|:---| +|Description| The number of socket connections (TCP or UDP) of the operating system. | +|Type| Int | + +* MergePeriodInSecond + +|Name| MergePeriodInSecond | +|:---:|:---| +|Description| The interval at which the IoTDB service process periodically triggers the merge process. | +|Unit| Second | +|Type| Long | + +* ClosePeriodInSecond + +|Name| ClosePeriodInSecond | +|:---:|:---| +|Description| The interval at which the IoTDB service process periodically flushes memory data to disk. | +|Unit| Second | +|Type| Long | + +### Data Status Monitoring + +This module is the statistical monitoring method that IoTDB provides for recording data information. The statistics are recorded in the system and stored in the database. The current 0.7.0 version of IoTDB provides statistics for writing data. + +The user can choose to enable or disable the data statistics monitoring function (set the `enable_stat_monitor` item in the configuration file, see [Engine Layer](chapter4,enginelayer) for details). + +#### Writing Data Monitor + +The current writing data statistics can be divided into two major modules: **Global Writing Data Statistics** and **Storage Group Writing Data Statistics**. **Global Writing Data Statistics** records the number of points written by the user and the number of requests. **Storage Group Writing Data Statistics** records the data of a certain storage group. + +By default, the system collects data every 5 seconds and writes the statistics into IoTDB at a system-specified location. (If you need to change the statistic frequency, you can set the `back_loop_period_sec` entry in the configuration file, see Section [Engine Layer](chapter4,enginelayer) for details). After the system is refreshed or restarted, IoTDB does not recover the statistics, and the statistics data will restart from zero.
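The monitor-related switches above all live in the engine configuration file. As a sketch (entry names are taken from the configuration tables in this chapter; the values shown are the documented defaults), enabling statistics collection might look like:

```
# enable background statistics collection (takes effect after restarting the system)
enable_stat_monitor=true
# how often the statistic module collects and writes data, in seconds
back_loop_period_in_second=5
# valid-data window checked during regular cleaning, in seconds
stat_monitor_detect_freq_sec=600
# retention time of statistics data, in seconds
stat_monitor_retain_interval_sec=600
```

Remember that all of these entries only take effect after restarting the system.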
+ +To prevent statistics from growing without bound, IoTDB periodically clears expired statistics data. The deletion interval is set by `stat_monitor_retain_interval_sec` (default 600s, see section [Engine Layer](chapter4,enginelayer) for details), and the valid data duration is set by the `stat_monitor_detect_freq_sec` entry (default 600s, see section [Engine Layer](chapter4,enginelayer) for details): when a clear operation triggers, only the data written within the last stat_monitor_detect_freq_sec seconds is kept as valid. To ensure system stability, frequent deletion of statistics is not allowed; if either parameter is set to less than the default value (600s), the system ignores the configured value and uses the default instead. + +You can use the `select` clause to query the writing data statistics in the same way as other timeseries.
+ +Here are the writing data statistics: + +* TOTAL_POINTS (GLOBAL) + +|Name| TOTAL\_POINTS | +|:---:|:---| +|Description| Calculate the global writing points number.| +|Type| Writing data statistics | +|Timeseries Name| root.stats.write.global.TOTAL\_POINTS | +|Reset After Restarting System| yes | +|Example| select TOTAL_POINTS from root.stats.write.global| + +* TOTAL\_REQ\_SUCCESS (GLOBAL) + +|Name| TOTAL\_REQ\_SUCCESS | +|:---:|:---| +|Description| Calculate the global successful requests number.| +|Type| Writing data statistics | +|Timeseries Name| root.stats.write.global.TOTAL\_REQ\_SUCCESS | +|Reset After Restarting System| yes | +|Example| select TOTAL\_REQ\_SUCCESS from root.stats.write.global| + +* TOTAL\_REQ\_FAIL (GLOBAL) + +|Name| TOTAL\_REQ\_FAIL | +|:---:|:---| +|Description| Calculate the global failed requests number.| +|Type| Writing data statistics | +|Timeseries Name| root.stats.write.global.TOTAL\_REQ\_FAIL | +|Reset After Restarting System| yes | +|Example| select TOTAL\_REQ\_FAIL from root.stats.write.global| + + +* TOTAL\_POINTS\_FAIL (GLOBAL) + +|Name| TOTAL\_POINTS\_FAIL | +|:---:|:---| +|Description| Calculate the global failed writing points number.| +|Type| Writing data statistics | +|Timeseries Name| root.stats.write.global.TOTAL\_POINTS\_FAIL | +|Reset After Restarting System| yes | +|Example| select TOTAL\_POINTS\_FAIL from root.stats.write.global| + + +* TOTAL\_POINTS\_SUCCESS (GLOBAL) + +|Name| TOTAL\_POINTS\_SUCCESS | +|:---:|:---| +|Description| Calculate the global successful writing points number.| +|Type| Writing data statistics | +|Timeseries Name| root.stats.write.global.TOTAL\_POINTS\_SUCCESS | +|Reset After Restarting System| yes | +|Example| select TOTAL\_POINTS\_SUCCESS from root.stats.write.global| + +* TOTAL\_REQ\_SUCCESS (STORAGE GROUP) + +|Name| TOTAL\_REQ\_SUCCESS | +|:---:|:---| +|Description| Calculate the successful requests number for a specific storage group| +|Type| Writing data statistics | +|Timeseries Name| 
root.stats.write.\.TOTAL\_REQ\_SUCCESS | +|Reset After Restarting System| yes | +|Example| select TOTAL\_REQ\_SUCCESS from root.stats.write.\| + +* TOTAL\_REQ\_FAIL (STORAGE GROUP) + +|Name| TOTAL\_REQ\_FAIL | +|:---:|:---| +|Description| Calculate the failed requests number for a specific storage group| +|Type| Writing data statistics | +|Timeseries Name| root.stats.write.\.TOTAL\_REQ\_FAIL | +|Reset After Restarting System| yes | +|Example| select TOTAL\_REQ\_FAIL from root.stats.write.\| + + +* TOTAL\_POINTS\_SUCCESS (STORAGE GROUP) + +|Name| TOTAL\_POINTS\_SUCCESS | +|:---:|:---| +|Description| Calculate the successful writing points number for a specific storage group.| +|Type| Writing data statistics | +|Timeseries Name| root.stats.write.\.TOTAL\_POINTS\_SUCCESS | +|Reset After Restarting System| yes | +|Example| select TOTAL\_POINTS\_SUCCESS from root.stats.write.\| + + +* TOTAL\_POINTS\_FAIL (STORAGE GROUP) + +|Name| TOTAL\_POINTS\_FAIL | +|:---:|:---| +|Description| Calculate the failed writing points number for a specific storage group.| +|Type| Writing data statistics | +|Timeseries Name| root.stats.write.\.TOTAL\_POINTS\_FAIL | +|Reset After Restarting System| yes | +|Example| select TOTAL\_POINTS\_FAIL from root.stats.write.\| + +> Note: +> +> \ should be replaced by the real storage group name, and the '.' in the storage group name needs to be replaced by '_'. For example, if the storage group name is 'root.a.b', it becomes 'root\_a\_b' when used in the statistics. + +##### Example + +Here we give some examples of using the writing data statistics. + +If you want to know the global successful writing points number, you can use a `select` clause to query its value.
The query statement is like this: + +``` +select TOTAL_POINTS_SUCCESS from root.stats.write.global +``` + +If you want to know the successful writing points number of root.ln (storage group), here is the query statement: + +``` +select TOTAL_POINTS_SUCCESS from root.stats.write.root_ln +``` + +If you want to know the latest value of a statistics timeseries, you can use the `MAX_VALUE` function to query it. Here is the query statement: + +``` +select MAX_VALUE(TOTAL_POINTS_SUCCESS) from root.stats.write.root_ln +``` + +#### File Size Monitor + +Sometimes we are concerned about how the data file size of IoTDB is changing, for example to estimate the remaining disk space or the data ingestion speed. The File Size Monitor provides several statistics that show how different types of file sizes change. + +By default, the file size monitor collects file size data every 5 seconds, using the same shared parameter ```back_loop_period_sec```. + +Unlike the Writing Data Monitor, the File Size Monitor currently does not delete statistics data at regular intervals. + +You can also use a `select` clause to get the file size statistics like other time series. + +Here are the file size statistics: + +* DATA + +|Name| DATA | +|:---:|:---| +|Description| Calculate the sum of all the file sizes under the data directory (```data/data``` by default) in byte.| +|Type| File size statistics | +|Timeseries Name| root.stats.file\_size.DATA | +|Reset After Restarting System| No | +|Example| select DATA from root.stats.file\_size.DATA| + +* SETTLED + +|Name| SETTLED | +|:---:|:---| +|Description| Calculate the sum of all the ```TsFile``` sizes (under ```data/data/settled``` by default) in byte.
If there are multiple ```TsFile``` directories like ```{data/data/settled1, data/data/settled2}```, this statistic is the sum of their size.| +|Type| File size statistics | +|Timeseries Name| root.stats.file\_size.SETTLED | +|Reset After Restarting System| No | +|Example| select SETTLED from root.stats.file\_size.SETTLED| + +* OVERFLOW + +|Name| OVERFLOW | +|:---:|:---| +|Description| Calculate the sum of all the ```out-of-order data file``` size (under ```data/data/unsequence``` by default) in byte.| +|Type| File size statistics | +|Timeseries Name| root.stats.file\_size.OVERFLOW | +|Reset After Restarting System| No | +|Example| select OVERFLOW from root.stats.file\_size.OVERFLOW| + + +* WAL + +|Name| WAL | +|:---:|:---| +|Description| Calculate the sum of all the ```Write-Ahead-Log file``` size (under ```data/wal``` by default) in byte.| +|Type| File size statistics | +|Timeseries Name| root.stats.file\_size.WAL | +|Reset After Restarting System| No | +|Example| select WAL from root.stats.file\_size.WAL| + + +* INFO + +|Name| INFO| +|:---:|:---| +|Description| Calculate the sum of all the ```.restore```, etc. 
file size (under ```data/system/info```) in byte.| +|Type| File size statistics | +|Timeseries Name| root.stats.file\_size.INFO | +|Reset After Restarting System| No | +|Example| select INFO from root.stats.file\_size.INFO| + +* SCHEMA + +|Name| SCHEMA | +|:---:|:---| +|Description| Calculate the sum of all the ```metadata file``` size (under ```data/system/metadata```) in byte.| +|Type| File size statistics | +|Timeseries Name| root.stats.file\_size.SCHEMA | +|Reset After Restarting System| No | +|Example| select SCHEMA from root.stats.file\_size.SCHEMA| diff --git a/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/4-System log.md b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/4-System log.md new file mode 100644 index 0000000000..b7ee607fb5 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/4-System log.md @@ -0,0 +1,66 @@ + + +# Chapter 4: Deployment and Management + +## System log + +IoTDB allows users to configure IoTDB system logs (such as the log output level) by modifying the log configuration file. The default location of the system log configuration file is the \$IOTDB_HOME/conf folder. + +The default log configuration file is named logback.xml. The user can modify the configuration of the system log by adding or changing the xml tree node parameters. Note that a change made through the log configuration file does not take effect immediately; it takes effect after restarting the system. logback.xml is used in the usual way. + +At the same time, in order to facilitate debugging of the system by developers and DBAs, we provide several JMX interfaces to dynamically modify the log configuration and configure the Log module of the system in real time without restarting the system. For detailed usage, see the [Dynamic System Log Configuration](Chap4dynamicsystemlog) section.
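To make the static file-based approach concrete, the fragment below is a minimal sketch of a logback.xml (the appender name and output pattern are illustrative, not taken from the shipped file); it raises the `org.apache.iotdb` package to DEBUG while the rest of the system stays at INFO. Restart IoTDB for such a change to take effect:

```
<configuration>
  <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d [%t] %-5p %C:%L - %m%n</pattern>
    </encoder>
  </appender>
  <!-- change only this logger's level; other loggers inherit from root -->
  <logger name="org.apache.iotdb" level="DEBUG"/>
  <root level="INFO">
    <appender-ref ref="stdout"/>
  </root>
</configuration>
```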
+ +### Dynamic System Log Configuration + +#### Connect JMX + +Here we use JConsole to connect with JMX. + +Start JConsole and establish a new JMX connection with the IoTDB Server (you can select the local process or input the IP and PORT for a remote connection; the default operation port of the IoTDB JMX service is 31999). Fig 4.1 shows the connection GUI of JConsole. + + + +After connecting, click `MBean` and find `ch.qos.logback.classic.default.ch.qos.logback.classic.jmx.JMXConfigurator` (as shown in fig 4.2). + + +In the JMXConfigurator window, six operations are provided, as shown in fig 4.3. You can use these interfaces to perform the operations. + + + +#### Interface Instruction + +* reloadDefaultConfiguration + +This method reloads the default logback configuration file. The user can modify the default configuration file first, and then call this method to reload the modified configuration file into the system to take effect. + +* reloadByFileName + +This method loads a logback configuration file with the specified path and name, and then makes it take effect. This method accepts a parameter of type String named p1, which is the path to the configuration file to load. + +* getLoggerEffectiveLevel + +This method obtains the current log level of the specified Logger. This method accepts a String type parameter named p1, which is the name of the specified Logger. This method returns the log level currently in effect for the specified Logger. + +* getLoggerLevel + +This method obtains the log level of the specified Logger. This method accepts a String type parameter named p1, which is the name of the specified Logger. This method returns the log level of the specified Logger. +Note that the difference between this method and the `getLoggerEffectiveLevel` method is that this method returns the log level that is explicitly set for the specified Logger in the configuration file.
If the user does not set a log level for the Logger, this method returns null. According to logback's log-level inheritance mechanism, if a Logger does not explicitly set a log level, it inherits the log level settings from its nearest ancestor. In that case, calling the `getLoggerEffectiveLevel` method returns the log level currently in effect for the Logger, while calling the method described in this section returns null. diff --git a/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/5-Data Management.md b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/5-Data Management.md new file mode 100644 index 0000000000..851cfff084 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/5-Data Management.md @@ -0,0 +1,77 @@ + + +# Chapter 4: Deployment and Management + + +## Data Management + +In IoTDB, there are many kinds of data that need to be stored. In this section, we introduce IoTDB's data storage strategy in order to give you an intuitive understanding of IoTDB's data management. + +The data that IoTDB stores is divided into three categories, namely data files, system files, and pre-write log files. + +### Data Files + +Data files store all the data written to IoTDB by users, which includes TsFiles and other files. The TsFile storage directory can be configured with the `tsfile_dir` configuration item (see [file layer](Chap4filelayer) for details). Other files can be configured through the [data_dir](chap4datadir) configuration item (see [Engine Layer](chapter4,enginelayer) for details). + +In order to better support users' storage requirements such as disk space expansion, IoTDB supports storing TsFiles in multiple file directories.
Users can set multiple storage paths as data storage locations (see the [tsfile_dir](Chap4,tsfiledir) configuration item), and you can specify or customize the directory selection policy (see the [mult_dir_strategy](chapter4,enginelayer) configuration item for details). + +### System Files + +System files include restore files and schema files, which store metadata information of the data in IoTDB. They can be configured through the `sys_dir` configuration item (see [System Layer](chapter4,systemlayer) for details). + +### Pre-write Log Files + +Pre-write log files store WAL files. They can be configured through the `wal_dir` configuration item (see [System Layer](chapter4,systemlayer) for details). + +### Example of Setting Data storage Directory + +For a clearer understanding of configuring the data storage directory, we give an example in this section. + +The data directory paths involved in the storage directory setting are: data_dir, tsfile_dir, mult_dir_strategy, sys_dir, and wal_dir, which refer to data files, storage strategy, system files, and pre-write log files. You can choose to configure the items you'd like to change; otherwise, you can use the system default configuration without any operation. + +Here we give an example of a user who configures all five configurations mentioned above. The configuration items are as follows: + +``` +data_dir = D:\\iotdb\\data\\data +tsfile_dir = E:\\iotdb\\data\\data1, data\\data2, F:\\data3
mult_dir_strategy = MaxDiskUsableSpaceFirstStrategy
sys_dir = data\\system
wal_dir = data + +``` +After setting the configuration, the system will: + +* Save all data files except TsFile in D:\\iotdb\\data\\data +* Save TsFile in E:\\iotdb\\data\\data1, $IOTDB_HOME\\data\\data2 and F:\\data3. The selection strategy is `MaxDiskUsableSpaceFirstStrategy`, that is, every time data is written to disk, the system automatically selects the directory with the largest remaining disk space to write the data. +* Save system data in $IOTDB_HOME\\data\\system +* Save WAL data in $IOTDB_HOME\\data + +> Note: +> +> If you change directory names in tsfile_dir, the newer names and the older names should be in one-to-one correspondence. Also, the files in the older directories need to be moved to the newer directories. +> +> If you add some directories to tsfile_dir, IoTDB will add the paths automatically. You do not need to do anything yourself. + +For example, modify tsfile_dir to: + +``` +tsfile_dir = D:\\data4, E:\\data5, F:\\data6 +``` + +You need to move the files in E:\iotdb\data\data1 to D:\data4, the files in %IOTDB_HOME%\data\data2 to E:\data5, and the files in F:\data3 to F:\data6. In this way, the system will operate normally. diff --git a/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/6-Build and use IoTDB by Dockerfile.md b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/6-Build and use IoTDB by Dockerfile.md new file mode 100644 index 0000000000..47afb0cbb7 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/4-Deployment and Management/6-Build and use IoTDB by Dockerfile.md @@ -0,0 +1,92 @@ + + +# Chapter 4: Deployment and Management + +## Build and use IoTDB by Dockerfile +Now a Dockerfile has been written at ROOT/docker/Dockerfile on the branch enable_docker_image. + +1.
You can build a docker image by: +``` +$ docker build -t iotdb:base git://github.com/apache/incubator-iotdb#master:docker +``` +Or: +``` +$ git clone https://github.com/apache/incubator-iotdb +$ cd incubator-iotdb +$ cd docker +$ docker build -t iotdb:base . +``` +Once the docker image has been built locally (the tag is iotdb:base in this example), you are almost done! + +2. create docker volume for data files and logs: +``` +$ docker volume create mydata +$ docker volume create mylogs +``` +3. run a docker container: +```shell +$ docker run -p 6667:6667 -v mydata:/iotdb/data -v mylogs:/iotdb/logs -d iotdb:base /iotdb/bin/start-server.sh +``` +If success, you can run `docker ps`, and get something like the following: +``` +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +2a68b6944cb5 iotdb:base "/iotdb/bin/start-se…" 4 minutes ago Up 5 minutes 0.0.0.0:6667->6667/tcp laughing_meitner +``` +You can use the above command to get the container ID: +``` +$ docker container ls +``` +suppose the ID is . + +And get the docker IP by: +``` +$ docker inspect --format='{{.NetworkSettings.IPAddress}}' +``` +suppose the IP is . + +4. If you just want to have a try by using iotdb-cli, you can: +``` +$ docker exec -it /bin/bash +$ (now you have enter the container): /cli/bin/start-client.sh -h localhost -p 6667 -u root -pw root +``` + +Or, run a new docker container as the client: +``` +$ docker run -it iotdb:base /cli/bin/start-client.sh -h -p 6667 -u root -pw root +``` +Or, if you have a iotdb-cli locally (e.g., you have compiled the source code by `mvn package`), and suppose your work_dir is cli/bin, then you can just run: +``` +$ start-client.sh -h localhost -p 6667 -u root -pw root +``` +5. 
If you want to write codes to insert data and query data, please add the following dependence: +```xml + + org.apache.iotdb + iotdb-jdbc + 0.8.0-SNAPSHOT + +``` +Some example about how to use IoTDB with IoTDB-JDBC can be found at: https://github.com/apache/incubator-iotdb/tree/master/jdbc/src/test/java/org/apache/iotdb/jdbc/demo + +(Notice that because we have not published Apache IoTDB version 0.8.0 now, you have to compile the source code by `mvn install -DskipTests` to install the dependence into your local maven repository) + +6. Now enjoy it! diff --git a/docs/Documentation/UserGuideV0.7.0/5-SQL Documentation.md b/docs/Documentation/UserGuideV0.7.0/5-IoTDB SQL Documentation/1-IoTDB Query Statement.md similarity index 85% rename from docs/Documentation/UserGuideV0.7.0/5-SQL Documentation.md rename to docs/Documentation/UserGuideV0.7.0/5-IoTDB SQL Documentation/1-IoTDB Query Statement.md index 4627784594..6b4cbba627 100644 --- a/docs/Documentation/UserGuideV0.7.0/5-SQL Documentation.md +++ b/docs/Documentation/UserGuideV0.7.0/5-IoTDB SQL Documentation/1-IoTDB Query Statement.md @@ -19,20 +19,6 @@ --> - - -- [Chapter 5: IoTDB SQL Documentation](#chapter-5-iotdb-sql-documentation) - - [IoTDB Query Statement](#iotdb-query-statement) - - [Schema Statement](#schema-statement) - - [Data Management Statement](#data-management-statement) - - [Database Management Statement](#database-management-statement) - - [Functions](#functions) - - [Reference](#reference) - - [Keywords](#keywords) - - [Identifiers](#identifiers) - - [Literals](#literals) - - # Chapter 5: IoTDB SQL Documentation In this part, we will introduce you IoTDB's Query Language. IoTDB offers you a SQL-like query language for interacting with IoTDB, the query language can be devided into 4 major parts: @@ -515,118 +501,3 @@ SELECT SUM(Path) (COMMA SUM(Path))* FROM [WHERE ]? Eg. 
SELECT SUM(temperature) FROM root.ln.wf01.wt01 WHERE root.ln.wf01.wt01.temperature < 24 Note: the statement needs to satisfy this constraint: + = ``` - -## Reference - -### Keywords - -``` -Keywords for IoTDB (case insensitive): -ADD, BY, COMPRESSOR, CREATE, DATATYPE, DELETE, DESCRIBE, DROP, ENCODING, EXIT, FROM, GRANT, GROUP, LABLE, LINK, INDEX, INSERT, INTO, LOAD, MAX_POINT_NUMBER, MERGE, METADATA, ON, ORDER, PASSWORD, PRIVILEGES, PROPERTY, QUIT, REVOKE, ROLE, ROOT, SELECT, SET, SHOW, STORAGE, TIME, TIMESERIES, TIMESTAMP, TO, UNLINK, UPDATE, USER, USING, VALUE, VALUES, WHERE, WITH - -Keywords with special meanings (case sensitive): -* Data Types: BOOLEAN, DOUBLE, FLOAT, INT32, INT64, TEXT (Only capitals is acceptable) -* Encoding Methods: BITMAP, DFT, GORILLA, PLAIN, RLE, TS_2DIFF (Only capitals is acceptable) -* Compression Methods: UNCOMPRESSED, SNAPPY (Only capitals is acceptable) -* Logical symbol: AND, &, &&, OR, | , ||, NOT, !, TRUE, FALSE -``` - -### Identifiers - -``` -QUOTE := '\''; -DOT := '.'; -COLON : ':' ; -COMMA := ',' ; -SEMICOLON := ';' ; -LPAREN := '(' ; -RPAREN := ')' ; -LBRACKET := '['; -RBRACKET := ']'; -EQUAL := '=' | '=='; -NOTEQUAL := '<>' | '!='; -LESSTHANOREQUALTO := '<='; -LESSTHAN := '<'; -GREATERTHANOREQUALTO := '>='; -GREATERTHAN := '>'; -DIVIDE := '/'; -PLUS := '+'; -MINUS := '-'; -STAR := '*'; -Letter := 'a'..'z' | 'A'..'Z'; -HexDigit := 'a'..'f' | 'A'..'F'; -Digit := '0'..'9'; -Boolean := TRUE | FALSE | 0 | 1 (case insensitive) - -``` - -``` -StringLiteral := ( '\'' ( ~('\'') )* '\'' | '\"' ( ~('\"') )* '\"'); -eg. ‘abc’ -eg. “abc” -``` - -``` -Integer := ('-' | '+')? Digit+; -eg. 123 -eg. -222 -``` - -``` -Float := ('-' | '+')? Digit+ DOT Digit+ (('e' | 'E') ('-' | '+')? Digit+)?; -eg. 3.1415 -eg. 1.2E10 -eg. -1.33 -``` - -``` -Identifier := (Letter | '_') (Letter | Digit | '_' | MINUS)*; -eg. a123 -eg. 
_abc123 - -``` - -### Literals - - -``` -PointValue : Integer | Float | StringLiteral | Boolean -``` -``` -TimeValue : Integer | DateTime | ISO8601 | NOW() -Note: Integer means timestamp type. - -DateTime : -eg. 2016-11-16T16:22:33+08:00 -eg. 2016-11-16 16:22:33+08:00 -eg. 2016-11-16T16:22:33.000+08:00 -eg. 2016-11-16 16:22:33.000+08:00 -Note: DateTime Type can support several types, see Chapter 3 Datetime section for details. -``` -``` -PrecedenceEqualOperator : EQUAL | NOTEQUAL | LESSTHANOREQUALTO | LESSTHAN | GREATERTHANOREQUALTO | GREATERTHAN -``` -``` -Timeseries : ROOT [DOT ]* DOT -LayerName : Identifier -SensorName : Identifier -eg. root.ln.wf01.wt01.status -eg. root.sgcc.wf03.wt01.temperature -Note: Timeseries must be start with `root`(case insensitive) and end with sensor name. -``` - -``` -PrefixPath : ROOT (DOT )* -LayerName : Identifier | STAR -eg. root.sgcc -eg. root.* -``` -``` -Path: (ROOT | ) (DOT )* -LayerName: Identifier | STAR -eg. root.ln.wf01.wt01.status -eg. root.*.wf01.wt01.status -eg. root.ln.wf01.wt01.* -eg. *.wt01.* -eg. 
* -``` diff --git a/docs/Documentation/UserGuideV0.7.0/5-IoTDB SQL Documentation/2-Reference.md b/docs/Documentation/UserGuideV0.7.0/5-IoTDB SQL Documentation/2-Reference.md new file mode 100644 index 0000000000..02dea85382 --- /dev/null +++ b/docs/Documentation/UserGuideV0.7.0/5-IoTDB SQL Documentation/2-Reference.md @@ -0,0 +1,137 @@ + + +# Chapter 5: IoTDB SQL Documentation + +## Reference + +### Keywords + +``` +Keywords for IoTDB (case insensitive): +ADD, BY, COMPRESSOR, CREATE, DATATYPE, DELETE, DESCRIBE, DROP, ENCODING, EXIT, FROM, GRANT, GROUP, LABLE, LINK, INDEX, INSERT, INTO, LOAD, MAX_POINT_NUMBER, MERGE, METADATA, ON, ORDER, PASSWORD, PRIVILEGES, PROPERTY, QUIT, REVOKE, ROLE, ROOT, SELECT, SET, SHOW, STORAGE, TIME, TIMESERIES, TIMESTAMP, TO, UNLINK, UPDATE, USER, USING, VALUE, VALUES, WHERE, WITH + +Keywords with special meanings (case sensitive): +* Data Types: BOOLEAN, DOUBLE, FLOAT, INT32, INT64, TEXT (Only capitals is acceptable) +* Encoding Methods: BITMAP, DFT, GORILLA, PLAIN, RLE, TS_2DIFF (Only capitals is acceptable) +* Compression Methods: UNCOMPRESSED, SNAPPY (Only capitals is acceptable) +* Logical symbol: AND, &, &&, OR, | , ||, NOT, !, TRUE, FALSE +``` + +### Identifiers + +``` +QUOTE := '\''; +DOT := '.'; +COLON : ':' ; +COMMA := ',' ; +SEMICOLON := ';' ; +LPAREN := '(' ; +RPAREN := ')' ; +LBRACKET := '['; +RBRACKET := ']'; +EQUAL := '=' | '=='; +NOTEQUAL := '<>' | '!='; +LESSTHANOREQUALTO := '<='; +LESSTHAN := '<'; +GREATERTHANOREQUALTO := '>='; +GREATERTHAN := '>'; +DIVIDE := '/'; +PLUS := '+'; +MINUS := '-'; +STAR := '*'; +Letter := 'a'..'z' | 'A'..'Z'; +HexDigit := 'a'..'f' | 'A'..'F'; +Digit := '0'..'9'; +Boolean := TRUE | FALSE | 0 | 1 (case insensitive) + +``` + +``` +StringLiteral := ( '\'' ( ~('\'') )* '\'' | '\"' ( ~('\"') )* '\"'); +eg. ‘abc’ +eg. “abc” +``` + +``` +Integer := ('-' | '+')? Digit+; +eg. 123 +eg. -222 +``` + +``` +Float := ('-' | '+')? Digit+ DOT Digit+ (('e' | 'E') ('-' | '+')? Digit+)?; +eg. 3.1415 +eg. 
1.2E10 +eg. -1.33 +``` + +``` +Identifier := (Letter | '_') (Letter | Digit | '_' | MINUS)*; +eg. a123 +eg. _abc123 + +``` + +### Literals + + +``` +PointValue : Integer | Float | StringLiteral | Boolean +``` +``` +TimeValue : Integer | DateTime | ISO8601 | NOW() +Note: Integer means timestamp type. + +DateTime : +eg. 2016-11-16T16:22:33+08:00 +eg. 2016-11-16 16:22:33+08:00 +eg. 2016-11-16T16:22:33.000+08:00 +eg. 2016-11-16 16:22:33.000+08:00 +Note: DateTime Type can support several types, see Chapter 3 Datetime section for details. +``` +``` +PrecedenceEqualOperator : EQUAL | NOTEQUAL | LESSTHANOREQUALTO | LESSTHAN | GREATERTHANOREQUALTO | GREATERTHAN +``` +``` +Timeseries : ROOT [DOT ]* DOT +LayerName : Identifier +SensorName : Identifier +eg. root.ln.wf01.wt01.status +eg. root.sgcc.wf03.wt01.temperature +Note: Timeseries must be start with `root`(case insensitive) and end with sensor name. +``` + +``` +PrefixPath : ROOT (DOT )* +LayerName : Identifier | STAR +eg. root.sgcc +eg. root.* +``` +``` +Path: (ROOT | ) (DOT )* +LayerName: Identifier | STAR +eg. root.ln.wf01.wt01.status +eg. root.*.wf01.wt01.status +eg. root.ln.wf01.wt01.* +eg. *.wt01.* +eg. * +``` diff --git a/docs/Documentation/UserGuideV0.7.0/6-JDBC Documentation.md b/docs/Documentation/UserGuideV0.7.0/6-JDBC API/1-JDBC API.md similarity index 90% rename from docs/Documentation/UserGuideV0.7.0/6-JDBC Documentation.md rename to docs/Documentation/UserGuideV0.7.0/6-JDBC API/1-JDBC API.md index 6c4b5df16c..4d7fd763bf 100644 --- a/docs/Documentation/UserGuideV0.7.0/6-JDBC Documentation.md +++ b/docs/Documentation/UserGuideV0.7.0/6-JDBC API/1-JDBC API.md @@ -19,9 +19,4 @@ --> - - -- [Chaper6: JDBC API](#chaper6-jdbc-api) - - # Chaper6: JDBC API \ No newline at end of file -- GitLab
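Since this chapter is still a stub, here is a minimal sketch of using the JDBC API through the `iotdb-jdbc` dependency shown in Chapter 4. The driver class name `org.apache.iotdb.jdbc.IoTDBDriver`, the `jdbc:iotdb://host:port/` URL scheme, and the root/root credentials are assumptions based on the CLI examples above; the SQL statements follow Chapter 5, and a running IoTDB server on port 6667 is required.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IoTDBJdbcExample {

    // Assumed URL scheme of the iotdb-jdbc driver; adjust host/port to your deployment.
    static String buildUrl(String host, int port) {
        return "jdbc:iotdb://" + host + ":" + port + "/";
    }

    public static void main(String[] args) throws Exception {
        // Register the driver shipped in the iotdb-jdbc artifact (class name is an assumption).
        Class.forName("org.apache.iotdb.jdbc.IoTDBDriver");
        try (Connection conn = DriverManager.getConnection(buildUrl("127.0.0.1", 6667), "root", "root");
             Statement stmt = conn.createStatement()) {
            // Schema and data statements follow the SQL documented in Chapter 5.
            stmt.execute("SET STORAGE GROUP TO root.demo");
            stmt.execute("CREATE TIMESERIES root.demo.d1.s1 WITH DATATYPE=INT32, ENCODING=RLE");
            stmt.execute("INSERT INTO root.demo.d1(timestamp, s1) VALUES (1, 42)");
            try (ResultSet rs = stmt.executeQuery("SELECT s1 FROM root.demo.d1")) {
                while (rs.next()) {
                    // Assumed result layout: column 1 is the timestamp, column 2 is s1.
                    System.out.println(rs.getLong(1) + "," + rs.getInt(2));
                }
            }
        }
    }
}
```

Compile it with the iotdb-jdbc jar on the classpath; since version 0.8.0 is not yet published, install the dependency into your local maven repository with `mvn install -DskipTests` first, as noted in the Docker section.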