Commit 6b61d3c2 authored by wmmhello

fix:conflicts from 3.0

@@ -303,14 +303,14 @@ Query OK, 2 row(s) in set (0.001700s)
 TDengine 提供了丰富的应用程序开发接口，其中包括 C/C++、Java、Python、Go、Node.js、C#、RESTful 等，便于用户快速开发应用：
-- [Java](https://docs.taosdata.com/reference/connector/java/)
-- [C/C++](https://www.taosdata.com/cn/documentation/connector#c-cpp)
-- [Python](https://docs.taosdata.com/reference/connector/python/)
-- [Go](https://docs.taosdata.com/reference/connector/go/)
-- [Node.js](https://docs.taosdata.com/reference/connector/node/)
-- [Rust](https://docs.taosdata.com/reference/connector/rust/)
-- [C#](https://docs.taosdata.com/reference/connector/csharp/)
-- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
+- [Java](https://docs.taosdata.com/connector/java/)
+- [C/C++](https://docs.taosdata.com/connector/cpp/)
+- [Python](https://docs.taosdata.com/connector/python/)
+- [Go](https://docs.taosdata.com/connector/go/)
+- [Node.js](https://docs.taosdata.com/connector/node/)
+- [Rust](https://docs.taosdata.com/connector/rust/)
+- [C#](https://docs.taosdata.com/connector/csharp/)
+- [RESTful API](https://docs.taosdata.com/connector/rest-api/)
 # 成为社区贡献者
@@ -19,29 +19,29 @@ English | [简体中文](README-CN.md) | We are hiring, check [here](https://tde
 # What is TDengine?
-TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
+TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/what-is-a-time-series-database/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
-- **High-Performance**: TDengine is the only time-series database to solve the high-cardinality issue, supporting billions of data collection points while outperforming other time-series databases in data ingestion, querying and data compression.
-- **Simplified Solution**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
-- **Cloud Native**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment and full observability, TDengine is a cloud native time-series database and can be deployed on public, private or hybrid clouds.
-- **Ease of Use**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, a simplified solution and seamless integration with third-party tools. For data users, it gives easy data access.
-- **Easy Data Analytics**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and access data in a highly efficient way.
-- **Open Source**: TDengine's core modules, including the cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
+- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high-cardinality issue, supporting billions of data collection points while outperforming other time-series databases in data ingestion, querying and data compression.
+- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
+- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment and full observability, TDengine is a cloud native time-series database and can be deployed on public, private or hybrid clouds.
+- **[Ease of Use](https://docs.tdengine.com/get-started/docker/)**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, a simplified solution and seamless integration with third-party tools. For data users, it gives easy data access.
+- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and access data in a highly efficient way.
+- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including the cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
 # Documentation
-For the user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.taosdata.com) ([TDengine 文档](https://docs.taosdata.com))
+For the user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
 # Building
-At the moment, the TDengine server supports running on Linux, Windows systems. Any OS application can also choose the RESTful interface of taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
+At the moment, the TDengine server supports running on Linux and Windows systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
-You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubernetes](https://docs.taosdata.com/deployment/k8s/) to install. This quick guide only applies to installing from source.
+You can choose to install from source code, a [container](https://docs.tdengine.com/get-started/docker/), an [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.
 TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. By default, compiling TDengine does not include taosTools; you can use `cmake .. -DBUILD_TOOLS=true` to have them compiled together with TDengine.
@@ -256,6 +256,7 @@ After building successfully, TDengine can be installed by:
 nmake install
 ```
+<!--
 ## On macOS platform
 After building successfully, TDengine can be installed by:
@@ -263,6 +264,7 @@ After building successfully, TDengine can be installed by:
 ```bash
 sudo make install
 ```
+-->
 ## Quick Run
@@ -304,14 +306,14 @@ Query OK, 2 row(s) in set (0.001700s)
 TDengine provides a rich set of development tools for users to develop applications on TDengine. Follow the links below to find your desired connectors and relevant documentation.
-- [Java](https://docs.taosdata.com/reference/connector/java/)
-- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
-- [Python](https://docs.taosdata.com/reference/connector/python/)
-- [Go](https://docs.taosdata.com/reference/connector/go/)
-- [Node.js](https://docs.taosdata.com/reference/connector/node/)
-- [Rust](https://docs.taosdata.com/reference/connector/rust/)
-- [C#](https://docs.taosdata.com/reference/connector/csharp/)
-- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
+- [Java](https://docs.tdengine.com/reference/connector/java/)
+- [C/C++](https://docs.tdengine.com/reference/connector/cpp/)
+- [Python](https://docs.tdengine.com/reference/connector/python/)
+- [Go](https://docs.tdengine.com/reference/connector/go/)
+- [Node.js](https://docs.tdengine.com/reference/connector/node/)
+- [Rust](https://docs.tdengine.com/reference/connector/rust/)
+- [C#](https://docs.tdengine.com/reference/connector/csharp/)
+- [RESTful API](https://docs.tdengine.com/reference/rest-api/)
 # Contribute to TDengine
@@ -103,6 +103,9 @@ IF (TD_WINDOWS)
     SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COMMON_FLAGS}")
 ELSE ()
+    IF (${TD_DARWIN})
+        set(CMAKE_MACOSX_RPATH 0)
+    ENDIF ()
     IF (${COVER} MATCHES "true")
         MESSAGE(STATUS "Test coverage mode, add extra flags")
         SET(GCC_COVERAGE_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage")
@@ -2,7 +2,7 @@
 # taos-tools
 ExternalProject_Add(taos-tools
     GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
-    GIT_TAG 2af2222
+    GIT_TAG 833b721
     SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
     BINARY_DIR ""
     #BUILD_IN_SOURCE TRUE
@@ -104,15 +104,15 @@ Each row contains the device ID, time stamp, collected metrics (current, voltage
 ## Metric
-Metric refers to the physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc., which changes with time, and whose data type can be integer, float, Boolean, or string. As time goes by, the amount of collected metric data stored increases.
+Metric refers to the physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc., which changes with time, and whose data type can be integer, float, Boolean, or string. As time goes by, the amount of collected metric data stored increases. In the smart meters example, current, voltage and phase are the metrics.
 ## Label/Tag
-Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time.
+Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time. In the meters example, `location` and `groupid` are the tags.
 ## Data Collection Point
-Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points.
+Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points. In the smart meters example, d1001, d1002, d1003, and d1004 are the data collection points.
 ## Table
@@ -137,7 +137,7 @@ The design of one table for one data collection point will require a huge number
 STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
-In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**.
+In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. In the smart meters example, we can create a super table named `meters`.
 ## Subtable
@@ -156,7 +156,9 @@ The relationship between a STable and the subtables created based on this STable
 Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform the aggregation operation, which reduces the number of data sets to be scanned and in turn greatly improves the performance of data aggregation across multiple DCPs. In essence, querying a supertable is a very efficient aggregate query on multiple DCPs of the same type.
-In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
+In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. In the smart meters example, we can create subtables like d1001, d1002, d1003, and d1004 under the super table meters.
+To better understand the data model using metrics, tags, super tables and subtables, please refer to the diagram below, which demonstrates the data model of the smart meters example. ![Meters Data Model Diagram](./supertable.webp)
 ## Database
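The metric/tag/supertable/subtable model introduced in this hunk can be illustrated with a small conceptual sketch in plain Python. This is not the TDengine API; the classes and method names below are invented for illustration, using the `meters`/`d1001`/`location`/`groupid` names from the document's smart meters example.

```python
from dataclasses import dataclass, field

@dataclass
class SuperTable:
    # One schema template per type of data collection point (DCP).
    name: str
    metrics: tuple      # shared metric columns, e.g. ("ts", "current", "voltage", "phase")
    tag_names: tuple    # static tag columns, e.g. ("location", "groupid")
    subtables: dict = field(default_factory=dict)

    def create_subtable(self, name, **tags):
        # Each subtable = one specific DCP with fixed tag values.
        self.subtables[name] = {"tags": tags, "rows": []}

    def query_avg(self, metric, **tag_filter):
        # A supertable query: first filter subtables by tag,
        # then aggregate the time-series rows of the matches.
        idx = self.metrics.index(metric)
        values = [row[idx]
                  for st in self.subtables.values()
                  if all(st["tags"].get(k) == v for k, v in tag_filter.items())
                  for row in st["rows"]]
        return sum(values) / len(values) if values else None

meters = SuperTable("meters", ("ts", "current", "voltage", "phase"),
                    ("location", "groupid"))
meters.create_subtable("d1001", location="California.SanFrancisco", groupid=2)
meters.create_subtable("d1002", location="California.SanFrancisco", groupid=3)
meters.subtables["d1001"]["rows"].append((1538548685000, 10.3, 219, 0.31))
meters.subtables["d1002"]["rows"].append((1538548685500, 11.8, 221, 0.28))

print(meters.query_avg("voltage", location="California.SanFrancisco"))  # 220.0
```

The point of the sketch is the lookup order: tags prune whole subtables before any row is scanned, which is why supertable aggregation across many DCPs stays efficient.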
@@ -16,7 +16,7 @@ import CDemo from "./_sub_c.mdx";
 TDengine provides data subscription and consumption interfaces similar to those of message queue products. These interfaces make it easier for applications to obtain data written to TDengine in real time and to process data in the order that events occurred. This simplifies your time-series data processing systems and reduces your costs, because it is no longer necessary to deploy a message queue product such as Kafka.
-To use TDengine data subscription, you define topics as in Kafka. However, a topic in TDengine is based on query conditions for an existing supertable, standard table, or subtable - in other words, a SELECT statement. You can use SQL to filter data by tag, table name, column, or expression and then perform a scalar function or user-defined function on the data. Aggregate functions are not supported. This gives TDengine data subscription more flexibility than similar products. The granularity of data can be controlled on demand by applications, while filtering and preprocessing are handled by TDengine instead of the application layer. This implementation reduces the amount of data transmitted and the complexity of applications.
+To use TDengine data subscription, you define topics as in Kafka. However, a topic in TDengine is based on query conditions for an existing supertable, table, or subtable - in other words, a SELECT statement. You can use SQL to filter data by tag, table name, column, or expression and then perform a scalar function or user-defined function on the data. Aggregate functions are not supported. This gives TDengine data subscription more flexibility than similar products. The granularity of data can be controlled on demand by applications, while filtering and preprocessing are handled by TDengine instead of the application layer. This implementation reduces the amount of data transmitted and the complexity of applications.
 By subscribing to a topic, a consumer can obtain the latest data in that topic in real time. Multiple consumers can be formed into a consumer group that consumes messages together. Consumer groups enable faster speed through multi-threaded, distributed data consumption. Note that consumers in different groups that are subscribed to the same topic do not consume messages together. A single consumer can subscribe to multiple topics. If the data in a supertable is sharded across multiple vnodes, consumer groups can consume it much more efficiently than single consumers. TDengine also includes an acknowledgement mechanism that ensures at-least-once delivery in complicated environments where machines may crash or restart.
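The consumer-group behavior described above — consumers in one group share a topic's messages, while separate groups each receive the full stream — can be sketched conceptually in plain Python. This is not the TDengine subscription API; the `Topic` class and its round-robin dispatch are simplifying assumptions made only to show the group semantics.

```python
from collections import defaultdict

class Topic:
    """Toy topic: every group sees every message; within a group,
    messages are shared across consumers (round-robin here)."""
    def __init__(self):
        self.messages = []
        self.groups = defaultdict(list)   # group name -> list of consumer buffers

    def subscribe(self, group, consumer):
        self.groups[group].append(consumer)

    def publish(self, msg):
        self.messages.append(msg)
        for consumers in self.groups.values():
            # Pick one consumer per group: the message is consumed
            # once per group, not once per consumer.
            target = consumers[len(self.messages) % len(consumers)]
            target.append(msg)

topic = Topic()
g1_c1, g1_c2, g2_c1 = [], [], []
topic.subscribe("g1", g1_c1)   # two consumers in group g1 share the stream
topic.subscribe("g1", g1_c2)
topic.subscribe("g2", g2_c1)   # group g2 independently gets everything
for i in range(4):
    topic.publish(i)

print(sorted(g1_c1 + g1_c2))  # [0, 1, 2, 3]  (split between the two consumers)
print(g2_c1)                  # [0, 1, 2, 3]  (full stream)
```

A real deployment adds offsets and acknowledgements on top of this, which is what gives the at-least-once guarantee mentioned above.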
@@ -170,71 +170,21 @@ taoscfg:
 # number of replications, for cluster only
 TAOS_REPLICA: "1"
-# number of days per DB file
-# TAOS_DAYS: "10"
-# number of days to keep DB file, default is 10 years.
-#TAOS_KEEP: "3650"
-# cache block size (Mbyte)
-#TAOS_CACHE: "16"
-# number of cache blocks per vnode
-#TAOS_BLOCKS: "6"
-# minimum rows of records in file block
-#TAOS_MIN_ROWS: "100"
-# maximum rows of records in file block
-#TAOS_MAX_ROWS: "4096"
 #
-# TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core
-#TAOS_NUM_OF_THREADS_PER_CORE: "1.0"
+# TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
+#TAOS_NUM_OF_RPC_THREADS: "2"
 #
 # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
 #TAOS_NUM_OF_COMMIT_THREADS: "4"
-#
-# TAOS_RATIO_OF_QUERY_CORES:
-# the proportion of total CPU cores available for query processing
-# 2.0: the query threads will be set to double of the CPU cores.
-# 1.0: all CPU cores are available for query processing [default].
-# 0.5: only half of the CPU cores are available for query.
-# 0.0: only one core available.
-#TAOS_RATIO_OF_QUERY_CORES: "1.0"
-#
-# TAOS_KEEP_COLUMN_NAME:
-# the last_row/first/last aggregator will not change the original column name in the result fields
-#TAOS_KEEP_COLUMN_NAME: "0"
-# enable/disable backing up the vnode directory when removing a vnode
-#TAOS_VNODE_BAK: "1"
 # enable/disable installation / usage report
 #TAOS_TELEMETRY_REPORTING: "1"
-# enable/disable load balancing
-#TAOS_BALANCE: "1"
-# max timer control blocks
-#TAOS_MAX_TMR_CTRL: "512"
 # time interval of system monitor, seconds
 #TAOS_MONITOR_INTERVAL: "30"
-# number of seconds allowed for a dnode to be offline, for cluster only
-#TAOS_OFFLINE_THRESHOLD: "8640000"
-# RPC retry timer, millisecond
-#TAOS_RPC_TIMER: "1000"
-# RPC maximum time for ack, seconds
-#TAOS_RPC_MAX_TIME: "600"
 # time interval of dnode status reporting to mnode, seconds, for cluster only
 #TAOS_STATUS_INTERVAL: "1"
@@ -245,37 +195,7 @@ taoscfg:
 #TAOS_MIN_SLIDING_TIME: "10"
 # minimum time window, millisecond
-#TAOS_MIN_INTERVAL_TIME: "10"
+#TAOS_MIN_INTERVAL_TIME: "1"
-# maximum delay before launching a stream computation, millisecond
-#TAOS_MAX_STREAM_COMP_DELAY: "20000"
-# maximum delay before launching a stream computation for the first time, millisecond
-#TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000"
-# retry delay when a stream computation fails, millisecond
-#TAOS_RETRY_STREAM_COMP_DELAY: "10"
-# the delayed time for launching a stream computation, from 0.1 (default, 10% of the whole computing time window) to 0.9
-#TAOS_STREAM_COMP_DELAY_RATIO: "0.1"
-# max number of vgroups per db, 0 means configured automatically
-#TAOS_MAX_VGROUPS_PER_DB: "0"
-# max number of tables per vnode
-#TAOS_MAX_TABLES_PER_VNODE: "1000000"
-# the number of acknowledgments required for successful data writing
-#TAOS_QUORUM: "1"
-# enable/disable compression
-#TAOS_COMP: "2"
-# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fsync; 2: write wal, and call fsync
-#TAOS_WAL_LEVEL: "1"
-# if walLevel is set to 2, the cycle of fsync being executed; if set to 0, fsync is called right away
-#TAOS_FSYNC: "3000"
 # the compressed rpc message, option:
 # -1 (no compression)
@@ -283,17 +203,8 @@ taoscfg:
 # > 0 (rpc message body larger than this value will be compressed)
 #TAOS_COMPRESS_MSG_SIZE: "-1"
-# max length of an SQL statement
-#TAOS_MAX_SQL_LENGTH: "1048576"
-# the maximum number of records allowed for super table time sorting
-#TAOS_MAX_NUM_OF_ORDERED_RES: "100000"
 # max number of connections allowed in dnode
-#TAOS_MAX_SHELL_CONNS: "5000"
+#TAOS_MAX_SHELL_CONNS: "50000"
-# max number of connections allowed in client
-#TAOS_MAX_CONNECTIONS: "5000"
 # stop writing logs when the disk size of the log folder is less than this value
 #TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
@@ -313,21 +224,8 @@ taoscfg:
 # enable/disable system monitor
 #TAOS_MONITOR: "1"
-# enable/disable recording the SQL statements via restful interface
-#TAOS_HTTP_ENABLE_RECORD_SQL: "0"
-# number of threads used to process http requests
-#TAOS_HTTP_MAX_THREADS: "2"
-# maximum number of rows returned by the restful interface
-#TAOS_RESTFUL_ROW_LIMIT: "10240"
-# The following parameter is used to limit the maximum number of lines in log files.
-# max number of lines per log filter
-# numOfLogLines 10000000
 # enable/disable async log
-#TAOS_ASYNC_LOG: "0"
+#TAOS_ASYNC_LOG: "1"
 #
 # time of keeping log files, days
@@ -344,25 +242,8 @@ taoscfg:
 # debug flag for all log types, takes effect when non-zero
 #TAOS_DEBUG_FLAG: "143"
-# enable/disable recording the SQL in taos client
-#TAOS_ENABLE_RECORD_SQL: "0"
 # generate core file when the service crashes
 #TAOS_ENABLE_CORE_FILE: "1"
-# maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden
-#TAOS_MAX_BINARY_DISPLAY_WIDTH: "30"
-# enable/disable stream (continuous query)
-#TAOS_STREAM: "1"
-# in retrieve blocking model, only 50% of query threads will be used in query processing in dnode
-#TAOS_RETRIEVE_BLOCKING_MODEL: "0"
-# the maximum allowed query buffer size in MB during query processing for each data node
-# -1 no limit (default)
-# 0 no query allowed, queries are disabled
-#TAOS_QUERY_BUFFER_SIZE: "-1"
 ```
 ## Scaling Out
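The hunk above strips 2.x-era parameters from the chart's `taoscfg` block. As a side note, the `TAOS_*` environment keys appear to mirror camelCase `taos.cfg` parameter names (e.g. `TAOS_MONITOR_INTERVAL` ↔ `monitorInterval`); the helper below is a hypothetical sketch of that naming convention — the mapping rule is an assumption for illustration, not taken from the chart itself.

```python
def env_to_cfg_name(env_key: str) -> str:
    # Hypothetical mapping: strip the TAOS_ prefix and camel-case the rest,
    # e.g. TAOS_MONITOR_INTERVAL -> monitorInterval. Verify against the
    # actual chart/taos.cfg before relying on it.
    parts = env_key.removeprefix("TAOS_").lower().split("_")
    return parts[0] + "".join(p.capitalize() for p in parts[1:])

print(env_to_cfg_name("TAOS_MONITOR_INTERVAL"))       # monitorInterval
print(env_to_cfg_name("TAOS_NUM_OF_COMMIT_THREADS"))  # numOfCommitThreads
print(env_to_cfg_name("TAOS_REPLICA"))                # replica
```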
@@ -11,7 +11,7 @@ When using TDengine to store and query data, the most important part of the data
 - The format must be `YYYY-MM-DD HH:mm:ss.MS`; the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
 - The internal function `now` can be used to get the current timestamp on the client side
 - The current timestamp of the client side is applied when `now` is used to insert data
-- Epoch Time: the timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from 1970-01-01 00:00:00.000 (UTC/GMT)
+- Epoch Time: the timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from UTC 1970-01-01 00:00:00.
 - Add/subtract operations can be carried out on timestamps. For example, `now-2h` means 2 hours before the time at which the query is executed. The units of time in operations can be b (nanosecond), u (microsecond), a (millisecond), s (second), m (minute), h (hour), d (day), or w (week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down-sampling operations.
 Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.
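The epoch-time and offset semantics above can be checked with a quick sketch in plain Python. It illustrates only the arithmetic, not TDengine's parser, and takes the document's example timestamp as UTC (in practice the client's time zone would apply).

```python
from datetime import datetime, timedelta, timezone

# `2017-08-12 18:25:58.128` from the example above, interpreted as UTC here.
ts = datetime(2017, 8, 12, 18, 25, 58, 128000, tzinfo=timezone.utc)

# Epoch time: the count since UTC 1970-01-01 00:00:00, expressed in
# seconds, milliseconds or nanoseconds depending on the database precision.
epoch_s = int(ts.timestamp())
epoch_ms = epoch_s * 1_000 + ts.microsecond // 1_000
print(epoch_s, epoch_ms)  # 1502562358 1502562358128

# `now-2h` / `now-1w` style offsets are plain interval arithmetic.
now = datetime.now(timezone.utc)
assert now - timedelta(weeks=1) < now - timedelta(hours=2) < now
```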
@@ -3,7 +3,7 @@ sidebar_label: SHOW Statement
 title: SHOW Statement for Metadata
 ---
-In addition to running SELECT statements on INFORMATION_SCHEMA, you can also use SHOW to obtain system metadata, information, and status.
+The `SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in the database `INFORMATION_SCHEMA`.
 ## SHOW ACCOUNTS
...@@ -15,6 +15,27 @@ About details of installing TDengine, please refer to [Installation Guide](../../
## Uninstall
<Tabs>
<TabItem label="Uninstall apt-get" value="aptremove">
Apt-get package of TDengine can be uninstalled as below:
```bash
$ sudo apt-get remove tdengine
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
tdengine
0 upgraded, 0 newly installed, 1 to remove and 18 not upgraded.
After this operation, 68.3 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 135625 files and directories currently installed.)
Removing tdengine (3.0.0.0) ...
TDengine is removed successfully!
```
</TabItem>
<TabItem label="Uninstall Deb" value="debuninst">
Deb package of TDengine can be uninstalled as below:
......
...@@ -10,7 +10,7 @@ One difference from the native connector is that the REST interface is stateless
## Installation
The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol. The REST interface is provided by [taosAdapter](../taosadapter); to use the REST interface you need to make sure `taosAdapter` is running properly.
## Verification
......
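As a quick check, a SQL statement can be posted directly to taosAdapter (this sketch assumes taosAdapter is listening on its default port 6041 with the default `root:taosdata` credentials):

```bash
curl -u root:taosdata -d "show databases" http://localhost:6041/rest/sql
```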
---
sidebar_label: C/C++
title: C/C++ Connector
---
......
---
toc_max_heading_level: 4
sidebar_label: Java
title: TDengine Java Connector
description: The TDengine Java Connector is implemented on the standard JDBC API and provides native and REST connectors.
......
---
toc_max_heading_level: 4
sidebar_label: Go
title: TDengine Go Connector
---
......
---
toc_max_heading_level: 4
sidebar_label: Rust
title: TDengine Rust Connector
---
......
---
sidebar_label: Python
title: TDengine Python Connector
description: "taospy is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. taospy wraps both the native and REST interfaces of TDengine, corresponding to the two submodules of taospy: taos and taosrest. In addition to wrapping the native and REST interfaces, taospy also provides a programming interface that conforms to the Python Database API Specification (PEP 249), making it easy to integrate taospy with many third-party tools, such as SQLAlchemy and pandas."
......
---
toc_max_heading_level: 4
sidebar_label: Node.js
title: TDengine Node.js Connector
---
......
---
toc_max_heading_level: 4
sidebar_label: C#
title: C# Connector
---
......
---
sidebar_label: PHP
title: PHP Connector
---
......
---
title: Introduction
description: A brief introduction to the major features of TDengine
toc_max_heading_level: 2
---
......
---
sidebar_label: Basic Concepts
title: Data Model and Basic Concepts
description: The data model and basic concepts of TDengine
---
To explain the basic concepts and to make the sample programs easier to follow, the TDengine documentation uses smart meters as a typical time-series data scenario. Suppose each smart meter collects three quantities: current, voltage, and phase. There are multiple smart meters, and each meter has two static attributes: location and group ID. The collected data looks like the following table:
...@@ -104,15 +106,15 @@ title: 数据模型和基本概念
## Metric
A metric is a physical quantity, such as current, voltage, temperature, pressure, or GPS position, collected by a sensor, device, or other data collection point. It changes over time, and its data type can be integer, floating point, Boolean, or string. As time goes by, the amount of stored metric data grows larger and larger. In the smart meter example, current, voltage, and phase are metrics.
## Label/Tag
A tag is a static attribute of a sensor, device, or other data collection point that does not change over time, such as device model, color, or location. Its data type can be any type. Although tags are static, TDengine allows users to add, modify, or delete tag values. Unlike metrics, the amount of stored tag data does not change much over time. In the smart meter example, location and groupId are tags.
## Data Collection Point
A data collection point is hardware or software that collects metrics at a preset time period or triggered by an event. A data collection point can collect one or more metrics, **but all these metrics are collected at the same moment and share the same timestamp**. A complex device often has multiple data collection points, each with its own collection period; they are completely independent and not synchronized. For example, a car may have one data collection point for the GPS position, one for the engine status, and one for the interior environment, so one car has three data collection points. In the smart meter example, d1001, d1002, d1003, and d1004 are data collection points.
## Table
...@@ -131,13 +133,14 @@ TDengine 建议用数据采集点的名字(如上表中的 D1001)来做表
For a complex device such as a car, which has multiple data collection points, multiple tables need to be created for that one car.
## STable (Super Table)
With one table per data collection point, the number of tables grows huge and becomes hard to manage. Moreover, applications often need to aggregate across data collection points, and such aggregation also becomes complicated. To solve this problem, TDengine introduces the concept of the super table (STable).
A super table is the set of data collection points of one specific type. Data collection points of the same type have exactly the same table structure, but each table (data collection point) has its own static attributes (tags). To describe a super table (the set of data collection points of one specific type), in addition to defining the table structure for the metrics, the schema of its tags must also be defined. The data type of a tag can be integer, floating point, or string; there can be multiple tags, and they can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N super tables need to be created.
In the design of TDengine, **a table represents one specific data collection point, while a super table represents a set of data collection points of the same type**. In the smart meter example, we can create one super table, meters.
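As a sketch, the super table for the smart meter example could be created like this (the column and tag definitions follow the example scenario; the exact types are illustrative):

```sql
CREATE STABLE meters (
  ts      TIMESTAMP,  -- collection time
  current FLOAT,      -- metric: current
  voltage INT,        -- metric: voltage
  phase   FLOAT       -- metric: phase
) TAGS (
  location VARCHAR(64),  -- static attribute: location
  groupId  INT           -- static attribute: group ID
);
```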
## Subtable
...@@ -156,7 +159,9 @@ TDengine 建议用数据采集点的名字(如上表中的 D1001)来做表
Queries can be performed on either a table or a super table. For a query on a super table, TDengine treats the data of all its subtables as one whole data set: it first finds the tables that satisfy the tag filter conditions, then scans the time-series data of those tables and performs the aggregation. This greatly reduces the data set that needs to be scanned and thus significantly improves query performance. In essence, through its support for super table queries, TDengine achieves efficient aggregation over multiple data collection points of the same type.
TDengine recommends creating the table for a data collection point through a super table rather than as a plain table. In the smart meter example, we can create the subtables d1001, d1002, d1003, d1004, and so on through the super table meters.
To better understand the relationship between super tables and subtables, refer to the following diagram of the smart meter data model. ![Smart meter data model diagram](./supertable.webp)
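Continuing the smart meter example, subtables are created through the super table, each with its own tag values (a sketch assuming a super table `meters` with location and groupId tags; the tag values are illustrative):

```sql
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
CREATE TABLE d1002 USING meters TAGS ('California.SanFrancisco', 3);
```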
## Database
......
---
sidebar_label: Docker
title: Quick Experience with TDengine Using Docker
description: Experience TDengine's efficient ingestion and queries using Docker
---
This section first introduces how to quickly experience TDengine through Docker, and then how to experience TDengine's ingestion and query capabilities in a Docker environment. If you are not familiar with Docker, please use the [installation package](../../get-started/package/) instead. If you want to contribute code to TDengine or are interested in its internal implementation, please visit the [TDengine GitHub repository](https://github.com/taosdata/TDengine) to download the source code, then build and install it.
......
---
sidebar_label: Installation Package
title: Get Started with an Installation Package
description: Quickly experience TDengine using an installation package
---
import Tabs from "@theme/Tabs";
......
---
title: Establish Connections
description: How to establish connections to TDengine with connectors, including connector installation
---
import Tabs from "@theme/Tabs";
......
---
sidebar_label: Data Modeling
title: Data Modeling in TDengine
description: How to build a data model in TDengine
---
TDengine adopts a relational-style data model, so databases and tables must be created. For a specific application scenario, the design of databases, super tables, and plain tables needs to be considered. This section does not discuss detailed syntax rules; it only introduces the concepts.
......
---
sidebar_label: Ingest Data
title: Ingest Data
description: The various ways to write data into TDengine
---
TDengine supports multiple write protocols, including SQL, the InfluxDB Line protocol, the OpenTSDB Telnet protocol, and the OpenTSDB JSON protocol. Data can be inserted one row at a time or in batches, for a single data collection point or for multiple data collection points at once. TDengine also supports multi-threaded insertion, out-of-order data insertion, and historical data insertion. The InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol are the three schemaless write protocols supported by TDengine. With schemaless writing there is no need to create super tables and subtables in advance, and the engine adapts the table structure to the data automatically.
......
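For example, a schemaless write with the InfluxDB Line protocol carries the schema in the data itself, while the SQL route below requires the super table to exist (a sketch; the measurement, field values, and table names are illustrative):

```sql
-- InfluxDB Line protocol payload (sent through a connector's schemaless API
-- or taosAdapter, not through the SQL interface):
--   meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249
-- SQL equivalent of a single-row insert, auto-creating the subtable:
INSERT INTO d1001 USING meters TAGS ('California.LosAngeles', 2) VALUES (now, 11.8, 221, 0.28);
```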
---
sidebar_label: Query Data
title: Query Data
description: "Major query features, and executing synchronous and asynchronous queries through connectors"
---
......
---
title: Developer Guide
sidebar_label: Developer Guide
description: A guide to help developers get started quickly
---
To develop an application using TDengine as the time-series data processing tool, there are several things to do:
......
---
title: REST API
sidebar_label: REST API
description: A detailed introduction to the RESTful API provided by TDengine
---
To support development on all kinds of platforms, TDengine provides an API that follows REST design standards, the REST API. To minimize the learning cost, and unlike the REST API designs of other databases, TDengine operates the database directly through the SQL statement contained in the BODY of an HTTP POST request; only a URL is needed. For how to use the REST connector, see the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1965.html).
...@@ -10,7 +12,7 @@ title: REST API
## Installation
The RESTful interface does not depend on any TDengine libraries, so the client does not need to install any TDengine libraries either; the client's development language only needs to support the HTTP protocol. TDengine's RESTful API is provided by [taosAdapter](../../reference/taosadapter); before using the RESTful API, make sure `taosAdapter` is running properly.
## Verification
......
---
sidebar_label: C/C++
title: C/C++ Connector
---
......
---
toc_max_heading_level: 4
sidebar_label: Java
title: TDengine Java Connector
description: The TDengine Java connector is implemented on the standard JDBC API and provides both native and REST connections.
......
---
toc_max_heading_level: 4
sidebar_label: Rust
title: TDengine Rust Connector
---
......
---
sidebar_label: Python
title: TDengine Python Connector
description: "taospy is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. taospy wraps both the native interface and the REST interface of TDengine, corresponding to the two submodules of taospy: taos and taosrest. In addition to wrapping the native and REST interfaces, taospy also provides a programming interface that conforms to the Python Database API Specification (PEP 249), which makes it easy to integrate taospy with many third-party tools, such as SQLAlchemy and pandas."
......
---
toc_max_heading_level: 4
sidebar_label: Node.js
title: TDengine Node.js Connector
---
......
---
toc_max_heading_level: 4
sidebar_label: C#
title: C# Connector
---
......
---
sidebar_label: PHP
title: PHP Connector
---
......
---
sidebar_label: Error Codes
title: Error Codes of the TDengine C/C++ Connector
description: A list of the error codes of the C/C++ connector, with detailed explanations
---
This document lists in detail the error codes that the client may receive when using the TDengine C/C++ connector, and the corresponding actions to take. Connectors for other languages also pass the return codes they receive to their callers when using the native connection method.
......
---
sidebar_label: Manual Deployment
title: Deploying and Managing a Cluster
description: Deploying a TDengine cluster manually with command-line tools
---
## Prerequisites
......
---
sidebar_label: Kubernetes
title: Deploying a TDengine Cluster on Kubernetes
description: A detailed guide to deploying a TDengine cluster with Kubernetes
---
As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment. This section introduces how to create a TDengine cluster from scratch, step by step, using YAML files, and focuses on the common operations of TDengine in a Kubernetes environment.
......
---
sidebar_label: Helm
title: Deploying a TDengine Cluster with Helm
description: A detailed guide to deploying a TDengine cluster with Helm
---
Helm is the package manager for Kubernetes. Deploying a TDengine cluster with Kubernetes as described in the previous section is already simple enough, but Helm can provide even more powerful capabilities.
...@@ -171,70 +172,19 @@ taoscfg:
TAOS_REPLICA: "1"
# TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
#TAOS_NUM_OF_RPC_THREADS: "2"
#
# TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
#TAOS_NUM_OF_COMMIT_THREADS: "4"
# enable/disable installation / usage report
#TAOS_TELEMETRY_REPORTING: "1"
# time interval of system monitor, seconds
#TAOS_MONITOR_INTERVAL: "30"
# time interval of dnode status reporting to mnode, seconds, for cluster only
#TAOS_STATUS_INTERVAL: "1"
...@@ -245,37 +195,7 @@ taoscfg:
#TAOS_MIN_SLIDING_TIME: "10"
# minimum time window, milli-second
#TAOS_MIN_INTERVAL_TIME: "1"
# the compressed rpc message, option:
# -1 (no compression)
...@@ -283,17 +203,8 @@ taoscfg:
# > 0 (rpc message body which larger than this value will be compressed)
#TAOS_COMPRESS_MSG_SIZE: "-1"
# max number of connections allowed in dnode
#TAOS_MAX_SHELL_CONNS: "50000"
# stop writing logs when the disk size of the log folder is less than this value
#TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
...@@ -313,21 +224,8 @@ taoscfg:
# enable/disable system monitor
#TAOS_MONITOR: "1"
# enable/disable async log
#TAOS_ASYNC_LOG: "1"
#
# time of keeping log files, days
...@@ -344,25 +242,8 @@ taoscfg:
# debug flag for all log type, take effect when non-zero value\
#TAOS_DEBUG_FLAG: "143"
# generate core file when service crash
#TAOS_ENABLE_CORE_FILE: "1"
```

## Scaling Out
......
---
sidebar_label: Deploying a Cluster
title: Deploying a Cluster
description: The various ways to deploy a TDengine cluster
---
TDengine supports clustering and provides the ability to scale horizontally. If higher processing capacity is needed, simply add more nodes. TDengine uses virtual-node technology to virtualize one node into multiple virtual nodes, achieving load balancing. Meanwhile, TDengine can group virtual nodes on different nodes into virtual-node groups and uses a multi-replica mechanism to guarantee the high availability of the system. TDengine's clustering feature is completely open source.
......
...@@ -11,7 +11,7 @@ description: "TDengine 支持的数据类型: 时间戳、浮点型、JSON 类
- The time format is `YYYY-MM-DD HH:mm:ss.MS`, with millisecond time resolution by default, for example: `2017-08-12 18:25:58.128`
- The internal function now is the current time of the client
- When inserting a record, if the timestamp is now, the current time of the client submitting the record is used
- Epoch Time: a timestamp can also be a long integer representing the number of milliseconds since UTC time 1970-01-01 00:00:00. Correspondingly, if the time precision of the database is set to "microseconds", a long-integer timestamp represents the number of microseconds since UTC time 1970-01-01 00:00:00; the logic for nanosecond precision is similar.
- Time can be added and subtracted. For example, now-2h means 2 hours before the moment of the query (the last 2 hours). The time unit after the number can be b (nanosecond), u (microsecond), a (millisecond), s (second), m (minute), h (hour), d (day), or w (week). For example, `select * from t1 where ts > now-2w and ts <= now-1w` queries the data of exactly one week, two weeks ago. When specifying the time window (interval) of a down-sampling operation, the time unit can also be n (calendar month) or y (calendar year).
The default timestamp precision of TDengine is millisecond, but microsecond and nanosecond are also supported via the PRECISION parameter of `CREATE DATABASE`.
......
---
title: Table Management
sidebar_label: Table
description: The various management operations on tables
---
## Create Table
......
---
sidebar_label: Supertable Management
title: Supertable (STable) Management
description: The various management operations on supertables
---
## Create STable
......
---
sidebar_label: Insert
title: Insert
description: Detailed syntax for writing data
---
## Syntax
......
---
sidebar_label: Select
title: Select
description: Detailed syntax for querying data
---
## Syntax
......
---
sidebar_label: Functions
title: Functions
description: The list of functions supported by TDengine
toc_max_heading_level: 4
---
......
---
sidebar_label: Time-Series Extensions
title: Time-Series Extensions
description: Query features specific to time-series data provided by TDengine
---
TDengine is a big-data platform designed for time-series data, with storage and compute both tailored to the characteristics of time-series data. On top of standard SQL support, it also provides a series of special query syntaxes that fit time-series business scenarios, which greatly simplifies application development for time-series scenarios.
......
---
sidebar_label: Data Subscription
title: Data Subscription
description: The data subscription feature provided by the TDengine message queue
---
Starting from TDengine 3.0.0.0, the message queue has been substantially optimized and enhanced to simplify users' solutions.
......
---
sidebar_label: Stream Processing
title: Stream Processing
description: Detailed SQL syntax for stream processing
---
......
---
sidebar_label: Operators
title: Operators
description: All the operators supported by TDengine
---
## Arithmetic Operators
......
---
sidebar_label: JSON Type
title: JSON Type
description: A detailed explanation of how to use the JSON type
---
......
---
title: Escape Characters
sidebar_label: Escape Characters
description: Detailed rules for using escape characters in TDengine
---
## Escape Character Table
......
---
sidebar_label: Name and Size Limits
title: Name and Size Limits
description: Legal character sets and naming restrictions
---
## Naming Rules
......
---
sidebar_label: Reserved Keywords
title: TDengine Reserved Keywords
description: The detailed list of TDengine reserved keywords
---
## Reserved Keywords
......
---
sidebar_label: Cluster Management
title: Cluster Management
description: Detailed analysis of the SQL commands for managing a cluster
---
The physical entities that form a TDengine cluster are dnodes (short for data node), which are processes running on top of the operating system. Within a dnode, vnodes (virtual nodes) responsible for time-series data storage can be created. In a multi-node cluster, when the replica of a database is 3, each vgroup in that database consists of 3 vnodes; when the replica is 1, each vgroup consists of 1 vnode. To configure a database with multiple replicas, there must be at least 3 dnodes in the cluster. A dnode can also host an mnode (management node); at most three mnodes can be created in a single cluster. In TDengine 3.0.0.0, to support separation of storage and compute, a new logical node type, the qnode (query node), was introduced. A qnode and a vnode can either coexist in the same dnode or be completely separated onto different dnodes.
......
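As a sketch, the corresponding management statements look like this (the endpoint and dnode ID are illustrative):

```sql
-- Add a new dnode to the cluster, then check its status
CREATE DNODE "h2.example.com:6030";
SHOW DNODES;
-- Create an mnode and a qnode on dnode 2, for high availability
-- and storage/compute separation respectively
CREATE MNODE ON DNODE 2;
CREATE QNODE ON DNODE 2;
```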
---
sidebar_label: Metadata
title: The Information_Schema Database That Stores Metadata
description: The Information_Schema database stores all the metadata in the system
---
TDengine has a built-in database named `INFORMATION_SCHEMA` that provides access to database metadata, system information, and status, such as the names of databases and tables and the SQL statements currently being executed. This database stores information about all the other databases maintained by TDengine. It contains multiple read-only tables. In fact, these tables are all views rather than base tables, so no files are associated with them. Therefore these tables can only be queried; write operations such as INSERT are not allowed. The `INFORMATION_SCHEMA` database is intended to provide, in a more consistent way, access to the information provided by the various SHOW statements supported by TDengine (such as SHOW TABLES and SHOW DATABASES). Compared with SHOW statements, using SELECT ... FROM INFORMATION_SCHEMA.tablename has the following advantages:
......
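For example, unlike a bare SHOW statement, an INFORMATION_SCHEMA query can filter and project with standard SQL (the `ins_tables` view and its columns follow the TDengine 3.0 layout, and the database and super table names are illustrative; verify against your version):

```sql
-- List only the subtables of super table `meters` in database `test`
SELECT table_name, stable_name
FROM information_schema.ins_tables
WHERE db_name = 'test' AND stable_name = 'meters';
```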
---
sidebar_label: Statistics
title: The Performance_Schema Database That Stores Statistics
description: The Performance_Schema database stores various statistics in the system
---
Starting with version 3.0, TDengine provides a built-in database, `performance_schema`, which stores performance-related statistics. This section describes its tables and table structures in detail.
......
---
sidebar_label: SHOW Statement
title: Viewing System Metadata with the SHOW Statement
description: The complete list of SHOW statements
---
The SHOW statement can be used to get brief system information. To get detailed metadata, system information, and status, please use select statements to query the tables in the INFORMATION_SCHEMA database.
## SHOW ACCOUNTS
......
---
sidebar_label: Access Control
title: Access Control
description: The access control feature, available only in the enterprise edition
---
This section describes how to perform access control operations in TDengine.
......
---
sidebar_label: User-Defined Functions
title: User-Defined Functions (UDF)
description: A detailed guide to using UDFs
---
In addition to TDengine's built-in functions, users can also write their own function logic and add it to the TDengine system.
......
---
sidebar_label: Indexing
title: Using Indexes
description: Details of using the indexing feature
---
TDengine introduced indexing starting from version 3.0.0.0, supporting SMA indexes and FULLTEXT indexes.
......
---
sidebar_label: Error Recovery
title: Error Recovery
description: How to terminate problematic connections, queries, and transactions so the system can return to normal
---
In a complex application scenario, connections and query tasks may enter an error state or take too long to finish. In such cases, a method is needed to terminate these connections or tasks.
......
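As a sketch, a problematic connection or query can be terminated like this (the connection and query IDs come from the corresponding SHOW output; the values shown here are illustrative):

```sql
-- Find and terminate a connection
SHOW CONNECTIONS;
KILL CONNECTION 1;
-- Find and terminate a query; the query ID has the form "connId:queryId"
SHOW QUERIES;
KILL QUERY '1:0.1';
```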
---
title: TDinsight
sidebar_label: TDinsight
description: A zero-dependency monitoring solution for TDengine based on Grafana
---
TDinsight is a solution for monitoring TDengine using a monitoring database and [Grafana].
......
...@@ -698,122 +698,123 @@ charset 的有效值是 UTF-8。
| 45 | numOfVnodeFetchThreads | 否 | 是 |
| 46 | numOfVnodeWriteThreads | 否 | 是 |
| 47 | numOfVnodeSyncThreads | 否 | 是 |
| 48 | numOfVnodeRsmaThreads | 否 | 是 |
| 49 | numOfQnodeQueryThreads | 否 | 是 |
| 50 | numOfQnodeFetchThreads | 否 | 是 |
| 51 | numOfSnodeSharedThreads | 否 | 是 |
| 52 | numOfSnodeUniqueThreads | 否 | 是 |
| 53 | rpcQueueMemoryAllowed | 否 | 是 |
| 54 | logDir | 是 | 是 |
| 55 | minimalLogDirGB | 是 | 是 |
| 56 | numOfLogLines | 是 | 是 |
| 57 | asyncLog | 是 | 是 |
| 58 | logKeepDays | 是 | 是 |
| 59 | debugFlag | 是 | 是 |
| 60 | tmrDebugFlag | 是 | 是 |
| 61 | uDebugFlag | 是 | 是 |
| 62 | rpcDebugFlag | 是 | 是 |
| 63 | jniDebugFlag | 是 | 是 |
| 64 | qDebugFlag | 是 | 是 |
| 65 | cDebugFlag | 是 | 是 |
| 66 | dDebugFlag | 是 | 是 |
| 67 | vDebugFlag | 是 | 是 |
| 68 | mDebugFlag | 是 | 是 |
| 69 | wDebugFlag | 是 | 是 |
| 70 | sDebugFlag | 是 | 是 |
| 71 | tsdbDebugFlag | 是 | 是 |
| 72 | tqDebugFlag | 否 | 是 |
| 73 | fsDebugFlag | 是 | 是 |
| 74 | udfDebugFlag | 否 | 是 |
| 75 | smaDebugFlag | 否 | 是 |
| 76 | idxDebugFlag | 否 | 是 |
| 77 | tdbDebugFlag | 否 | 是 |
| 78 | metaDebugFlag | 否 | 是 |
| 79 | timezone | 是 | 是 |
| 80 | locale | 是 | 是 |
| 81 | charset | 是 | 是 |
| 82 | udf | 是 | 是 |
| 83 | enableCoreFile | 是 | 是 |
| 84 | arbitrator | 是 | 否 |
| 85 | numOfThreadsPerCore | 是 | 否 |
| 86 | numOfMnodes | 是 | 否 |
| 87 | vnodeBak | 是 | 否 |
| 88 | balance | 是 | 否 |
| 89 | balanceInterval | 是 | 否 |
| 90 | offlineThreshold | 是 | 否 |
| 91 | role | 是 | 否 |
| 92 | dnodeNopLoop | 是 | 否 |
| 93 | keepTimeOffset | 是 | 否 |
| 94 | rpcTimer | 是 | 否 |
| 95 | rpcMaxTime | 是 | 否 |
| 96 | rpcForceTcp | 是 | 否 |
| 97 | tcpConnTimeout | 是 | 否 |
| 98 | syncCheckInterval | 是 | 否 |
| 99 | maxTmrCtrl | 是 | 否 |
| 100 | monitorReplica | 是 | 否 |
| 101 | smlTagNullName | 是 | 否 |
| 102 | keepColumnName | 是 | 否 |
| 103 | ratioOfQueryCores | 是 | 否 |
| 104 | maxStreamCompDelay | 是 | 否 |
| 105 | maxFirstStreamCompDelay | 是 | 否 |
| 106 | retryStreamCompDelay | 是 | 否 |
| 107 | streamCompDelayRatio | 是 | 否 |
| 108 | maxVgroupsPerDb | 是 | 否 |
| 109 | maxTablesPerVnode | 是 | 否 |
| 110 | minTablesPerVnode | 是 | 否 |
| 111 | tableIncStepPerVnode | 是 | 否 |
| 112 | cache | 是 | 否 |
| 113 | blocks | 是 | 否 |
| 114 | days | 是 | 否 |
| 115 | keep | 是 | 否 |
| 116 | minRows | 是 | 否 |
| 117 | maxRows | 是 | 否 |
| 118 | quorum | 是 | 否 |
| 119 | comp | 是 | 否 |
| 120 | walLevel | 是 | 否 |
| 121 | fsync | 是 | 否 |
| 122 | replica | 是 | 否 |
| 123 | partitions | 是 | 否 |
| 124 | quorum | 是 | 否 |
| 125 | update | 是 | 否 |
| 126 | cachelast | 是 | 否 |
| 127 | maxSQLLength | 是 | 否 |
| 128 | maxWildCardsLength | 是 | 否 |
| 129 | maxRegexStringLen | 是 | 否 |
| 130 | maxNumOfOrderedRes | 是 | 否 |
| 131 | maxConnections | 是 | 否 |
| 132 | mnodeEqualVnodeNum | 是 | 否 |
| 133 | http | 是 | 否 |
| 134 | httpEnableRecordSql | 是 | 否 |
| 135 | httpMaxThreads | 是 | 否 |
| 136 | restfulRowLimit | 是 | 否 |
| 137 | httpDbNameMandatory | 是 | 否 |
| 138 | enableRecordSql | 是 | 否 | | 138 | httpKeepAlive | 是 | 否 |
| 139 | maxBinaryDisplayWidth | 是 | 否 | | 139 | enableRecordSql | 是 | 否 |
| 140 | stream | 是 | 否 | | 140 | maxBinaryDisplayWidth | 是 | 否 |
| 141 | retrieveBlockingModel | 是 | 否 | | 141 | stream | 是 | 否 |
| 142 | tsdbMetaCompactRatio | 是 | 否 | | 142 | retrieveBlockingModel | 是 | 否 |
| 143 | defaultJSONStrType | 是 | 否 | | 143 | tsdbMetaCompactRatio | 是 | 否 |
| 144 | walFlushSize | 是 | 否 | | 144 | defaultJSONStrType | 是 | 否 |
| 145 | keepTimeOffset | 是 | 否 | | 145 | walFlushSize | 是 | 否 |
| 146 | flowctrl | 是 | 否 | | 146 | keepTimeOffset | 是 | 否 |
| 147 | slaveQuery | 是 | 否 | | 147 | flowctrl | 是 | 否 |
| 148 | adjustMaster | 是 | 否 | | 148 | slaveQuery | 是 | 否 |
| 149 | topicBinaryLen | 是 | 否 | | 149 | adjustMaster | 是 | 否 |
| 150 | telegrafUseFieldNum | 是 | 否 | | 150 | topicBinaryLen | 是 | 否 |
| 151 | deadLockKillQuery | 是 | 否 | | 151 | telegrafUseFieldNum | 是 | 否 |
| 152 | clientMerge | 是 | 否 | | 152 | deadLockKillQuery | 是 | 否 |
| 153 | sdbDebugFlag | 是 | 否 | | 153 | clientMerge | 是 | 否 |
| 154 | odbcDebugFlag | 是 | 否 | | 154 | sdbDebugFlag | 是 | 否 |
| 155 | httpDebugFlag | 是 | 否 | | 155 | odbcDebugFlag | 是 | 否 |
| 156 | monDebugFlag | 是 | 否 | | 156 | httpDebugFlag | 是 | 否 |
| 157 | cqDebugFlag | 是 | 否 | | 157 | monDebugFlag | 是 | 否 |
| 158 | shortcutFlag | 是 | 否 | | 158 | cqDebugFlag | 是 | 否 |
| 159 | probeSeconds | 是 | 否 | | 159 | shortcutFlag | 是 | 否 |
| 160 | probeKillSeconds | 是 | 否 | | 160 | probeSeconds | 是 | 否 |
| 161 | probeInterval | 是 | 否 | | 161 | probeKillSeconds | 是 | 否 |
| 162 | lossyColumns | 是 | 否 | | 162 | probeInterval | 是 | 否 |
| 163 | fPrecision | 是 | 否 | | 163 | lossyColumns | 是 | 否 |
| 164 | dPrecision | 是 | 否 | | 164 | fPrecision | 是 | 否 |
| 165 | maxRange | 是 | 否 | | 165 | dPrecision | 是 | 否 |
| 166 | range | 是 | 否 | | 166 | maxRange | 是 | 否 |
| 167 | range | 是 | 否 |
---
title: Reference Manual
description: Detailed descriptions of the various TDengine components
---

The reference manual offers the most detailed introduction to TDengine itself, its connectors for each supported language, and the bundled tools.

...
@@ -47,7 +47,23 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/

<Tabs>
<TabItem label="apt-get uninstall" value="aptremove">

Uninstall with the following command:
```
$ sudo apt-get remove tdengine
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
tdengine
0 upgraded, 0 newly installed, 1 to remove and 18 not upgraded.
After this operation, 68.3 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 135625 files and directories currently installed.)
Removing tdengine (3.0.0.0) ...
TDengine is removed successfully!
```
</TabItem>
<TabItem label="Deb uninstall" value="debuninst">

@@ -57,7 +73,7 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/

```
$ sudo dpkg -r tdengine
(Reading database ... 120119 files and directories currently installed.)
Removing tdengine (3.0.0.0) ...
TDengine is removed successfully!
```

...
---
sidebar_label: Capacity Planning
title: Capacity Planning
description: How to plan the physical resources required by a TDengine cluster
---

When building an IoT big data platform with TDengine, computing and storage resources must be planned according to the business scenario. The sections below discuss the memory, CPU, and disk space the system needs to run.

...
---
title: Fault Tolerance and Disaster Recovery
sidebar_label: Fault Tolerance and Disaster Recovery
description: Fault tolerance and disaster recovery in TDengine
---

## Fault Tolerance

...
---
title: Data Import
description: How to import external data into TDengine
---

TDengine provides several convenient ways to import data: from a script file, from a data file, or by using the taosdump tool to re-import files that taosdump itself exported.

...
---
title: Data Export
description: How to export data from TDengine
---

To make data export easy, TDengine offers two methods: exporting by table, and exporting with taosdump.

...
---
title: System Monitoring
description: Monitoring the operational status of TDengine
---

Through [taosKeeper](/reference/taosKeeper/), TDengine periodically writes server metrics such as CPU, memory, disk space, bandwidth, request count, and disk read/write speed into a designated database. TDengine also records important system operations (such as logins and the creation or deletion of databases) along with various error and alarm messages. System administrators can inspect this database directly from the CLI, or view the monitoring data through a graphical web interface.

...
---
title: Diagnostics and More
description: Troubleshooting techniques for common problems
---

## Network Connection Diagnostics

...
---
sidebar_label: Grafana
title: Grafana
description: A detailed guide to using Grafana with TDengine
---

import Tabs from "@theme/Tabs";

...
---
sidebar_label: Prometheus
title: Prometheus
description: Accessing TDengine from Prometheus
---

import Prometheus from "../14-reference/_prometheus.mdx"

...
---
sidebar_label: Telegraf
title: Writing Data via Telegraf
description: Writing data into TDengine with Telegraf
---

import Telegraf from "../14-reference/_telegraf.mdx"

...
---
sidebar_label: collectd
title: Writing Data via collectd
description: Writing data into TDengine with collectd
---

import CollectD from "../14-reference/_collectd.mdx"

...
---
sidebar_label: StatsD
title: Writing Data Directly via StatsD
description: Writing data into TDengine with StatsD
---

import StatsD from "../14-reference/_statsd.mdx"

...
---
sidebar_label: icinga2
title: Writing Data via icinga2
description: Writing data into TDengine with icinga2
---

import Icinga2 from "../14-reference/_icinga2.mdx"

...
---
sidebar_label: TCollector
title: Writing Data via TCollector
description: Writing data into TDengine with TCollector
---

import TCollector from "../14-reference/_tcollector.mdx"

...
---
sidebar_label: EMQX Broker
title: Writing Data via EMQX Broker
description: Writing data into TDengine with EMQX Broker
---

MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is an open-source MQTT broker; without writing any code, a few simple "rule" configurations in the EMQX Dashboard are enough to write MQTT data directly into TDengine. EMQX supports saving data to TDengine by forwarding it to a web service, and its enterprise edition also provides a native TDengine driver for direct persistence.

...
---
sidebar_label: HiveMQ Broker
title: Writing Data via HiveMQ Broker
description: Writing data into TDengine with HiveMQ Broker
---

[HiveMQ](https://www.hivemq.com/) is an MQTT broker, available in free personal and enterprise editions, used mainly for enterprise and emerging machine-to-machine (M2M) communication and internal transport, with a focus on scalability, manageability, and security. HiveMQ offers an open-source extension development kit, and data can be saved to TDengine through the HiveMQ extension - TDengine. See the [HiveMQ extension - TDengine documentation](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README.md) for details.
---
sidebar_label: Kafka
title: TDengine Kafka Connector
description: A detailed guide to using TDengine Kafka Connector
---

TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDengine Sink Connector. With nothing more than a simple configuration file, users can sync data from a specified Kafka topic into TDengine (in batch or in real time), or sync data from a specified TDengine database into Kafka (in batch or in real time).

...
---
sidebar_label: Architecture
title: Architecture
description: TDengine's architecture design, covering cluster, storage, caching and persistence, data backup, multi-tier storage, and more
---

## Cluster and Basic Logical Units

...
---
title: High Availability
description: The high availability design of TDengine
---

## High Availability of Vnodes

...
---
title: Load Balancing
description: The load balancing design of TDengine
---

Load balancing in TDengine refers mainly to balancing the processing of time-series data. TDengine uses a consistent-hashing algorithm to spread the data of all tables and subtables in a database evenly across the vgroups belonging to that database; each table or subtable is handled by exactly one vgroup, while one vgroup may handle multiple tables or subtables.
@@ -7,7 +8,7 @@

The number of vgroups in a database can be specified at creation time:

```sql
create database db0 vgroups 20;
```
How many vgroups to choose depends on system resources. Assuming the system will host only one database, the vgroup count is determined by the resources usable by all dnodes in the cluster. In principle, the more CPU and memory available, the more vgroups can be created. Disk performance must be considered as well: once the disks hit their performance ceiling, too many vgroups will actually drag down overall system performance. If the system will host multiple databases, the sum of their vgroups depends on the amount of resources available, and vgroups should be distributed among the databases by weighing factors such as table count, write frequency, and data volume. In practice, it is recommended to pick an initial vgroup count from the system's resource configuration, for example twice the total number of CPU cores, then find the optimal setting through testing; that value becomes the total number of vgroups in the system. With multiple databases, allocate vgroups to each one according to its table count and data volume.
...
---
title: Inside TDengine
description: The internal design of TDengine
---

```mdx-code-block

...
---
sidebar_label: TDengine + Telegraf + Grafana
title: TDengine + Telegraf + Grafana
description: Quickly build an IT operations visualization system with TDengine + Telegraf + Grafana
---

## Background

...
---
sidebar_label: TDengine + collectd/StatsD + Grafana
title: TDengine + collectd/StatsD + Grafana
description: Quickly build an IT operations monitoring system with TDengine + collectd/StatsD + Grafana
---

## Background

...
---
title: Practical Applications
description: Examples of using TDengine together with other open-source components
---

```mdx-code-block

...
---
title: Common Problems and Feedback
description: A collection of solutions to frequently encountered problems
---

## Feedback

...
---
title: FAQ and More
description: Problems users frequently run into
---

```mdx-code-block

...
---
sidebar_label: TDengine Release History
title: TDengine Release History
description: TDengine release history, release notes, and download links
---

import Release from "/components/ReleaseV3";

@@ -9,7 +10,7 @@ import Release from "/components/ReleaseV3";

<Release type="tdengine" version="3.0.0.1" />

<!-- ## 3.0.0.0

<Release type="tdengine" version="3.0.0.0" /> -->
---
sidebar_label: taosTools Release History
title: taosTools Release History
description: taosTools release history, release notes, and download links
---

import Release from "/components/ReleaseV3";

...
@@ -96,7 +96,7 @@ int32_t create_stream() {
  taos_free_result(pRes);

  pRes = taos_query(pConn,
                    "create stream stream1 trigger at_once watermark 10s into outstb as select _wstart start, avg(k) from st1 partition by tbname interval(10s)");
  if (taos_errno(pRes) != 0) {
    printf("failed to create stream stream1, reason:%s\n", taos_errstr(pRes));
    return -1;

...
@@ -22,27 +22,27 @@ extern "C" {

#ifndef TDENGINE_SYSTABLE_H
#define TDENGINE_SYSTABLE_H

#define TSDB_INFORMATION_SCHEMA_DB        "information_schema"
#define TSDB_INS_TABLE_DNODES             "ins_dnodes"
#define TSDB_INS_TABLE_MNODES             "ins_mnodes"
#define TSDB_INS_TABLE_MODULES            "ins_modules"
#define TSDB_INS_TABLE_QNODES             "ins_qnodes"
#define TSDB_INS_TABLE_BNODES             "ins_bnodes"
#define TSDB_INS_TABLE_SNODES             "ins_snodes"
#define TSDB_INS_TABLE_CLUSTER            "ins_cluster"
#define TSDB_INS_TABLE_DATABASES          "ins_databases"
#define TSDB_INS_TABLE_FUNCTIONS          "ins_functions"
#define TSDB_INS_TABLE_INDEXES            "ins_indexes"
#define TSDB_INS_TABLE_STABLES            "ins_stables"
#define TSDB_INS_TABLE_TABLES             "ins_tables"
#define TSDB_INS_TABLE_TAGS               "ins_tags"
#define TSDB_INS_TABLE_TABLE_DISTRIBUTED  "ins_table_distributed"
#define TSDB_INS_TABLE_USERS              "ins_users"
#define TSDB_INS_TABLE_LICENCES           "ins_grants"
#define TSDB_INS_TABLE_VGROUPS            "ins_vgroups"
#define TSDB_INS_TABLE_VNODES             "ins_vnodes"
#define TSDB_INS_TABLE_CONFIGS            "ins_configs"
#define TSDB_INS_TABLE_DNODE_VARIABLES    "ins_dnode_variables"

#define TSDB_PERFORMANCE_SCHEMA_DB        "performance_schema"
#define TSDB_PERFS_TABLE_SMAS             "perf_smas"
@@ -60,16 +60,20 @@ typedef struct SSysDbTableSchema {
  const char*   name;
  const int32_t type;
  const int32_t bytes;
  const bool    sysInfo;
} SSysDbTableSchema;

typedef struct SSysTableMeta {
  const char*              name;
  const SSysDbTableSchema* schema;
  const int32_t            colNum;
  const bool               sysInfo;
} SSysTableMeta;

void getInfosDbMeta(const SSysTableMeta** pInfosTableMeta, size_t* size);
void getPerfDbMeta(const SSysTableMeta** pPerfsTableMeta, size_t* size);
void getVisibleInfosTablesNum(bool sysInfo, size_t* size);
bool invisibleColumn(bool sysInfo, int8_t tableType, int8_t flags);

#ifdef __cplusplus
}

...
@@ -44,6 +44,30 @@ enum {
)
// clang-format on
// Key identifying one window of one group: the window start timestamp plus the group id.
typedef struct {
  TSKEY    ts;
  uint64_t groupId;
} SWinKey;

// Total order over SWinKey: compare by groupId first, then by ts.
// The kLen parameters are unused; they exist to match the generic key-comparator signature.
static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, int kLen2) {
  SWinKey* pWin1 = (SWinKey*)pKey1;
  SWinKey* pWin2 = (SWinKey*)pKey2;

  if (pWin1->groupId > pWin2->groupId) {
    return 1;
  } else if (pWin1->groupId < pWin2->groupId) {
    return -1;
  }

  if (pWin1->ts > pWin2->ts) {
    return 1;
  } else if (pWin1->ts < pWin2->ts) {
    return -1;
  }

  return 0;
}
enum {
  TMQ_MSG_TYPE__DUMMY = 0,
  TMQ_MSG_TYPE__POLL_RSP,
@@ -182,7 +206,7 @@ typedef struct SColumn {
  int16_t slotId;

  char    name[TSDB_COL_NAME_LEN];
  int16_t colType;  // column type: normal column, tag, or window column
  int16_t type;
  int32_t bytes;
  uint8_t precision;

...
The diffs for the remaining files are collapsed.