Commit a929aa22 authored by dengyihao

merge 3.0

......@@ -37,6 +37,21 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
SET(TD_LINUX_32 TRUE)
ENDIF ()
EXECUTE_PROCESS(COMMAND chmod 777 ${CMAKE_CURRENT_LIST_DIR}/../packaging/tools/get_os.sh)
EXECUTE_PROCESS(COMMAND readlink /bin/sh OUTPUT_VARIABLE SHELL_LINK)
MESSAGE(STATUS "The shell is: " ${SHELL_LINK})
IF (${SHELL_LINK} MATCHES "dash")
EXECUTE_PROCESS(COMMAND ${CMAKE_CURRENT_LIST_DIR}/../packaging/tools/get_os.sh "" OUTPUT_VARIABLE TD_OS_INFO)
ELSE ()
EXECUTE_PROCESS(COMMAND sh ${CMAKE_CURRENT_LIST_DIR}/../packaging/tools/get_os.sh "" OUTPUT_VARIABLE TD_OS_INFO)
ENDIF ()
MESSAGE(STATUS "The current OS is " ${TD_OS_INFO})
IF (${TD_OS_INFO} MATCHES "Alpine")
SET(TD_ALPINE TRUE)
ADD_DEFINITIONS("-D_ALPINE")
ENDIF ()
ELSEIF (${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
SET(TD_DARWIN TRUE)
......
......@@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "3.0.2.4")
SET(TD_VER_NUMBER "3.0.2.5")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
......
......@@ -2,7 +2,7 @@
# taosadapter
ExternalProject_Add(taosadapter
GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
GIT_TAG db6c843
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE
......
......@@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG 05b61e0
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE
......
......@@ -2,7 +2,7 @@
# taosws-rs
ExternalProject_Add(taosws-rs
GIT_REPOSITORY https://github.com/taosdata/taos-connector-rust.git
GIT_TAG main
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosws-rs"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE
......
---
title: TDengine Documentation
sidebar_label: Documentation Home
description: This website contains the user manuals for TDengine, an open-source, cloud-native time-series database optimized for IoT, Connected Cars, and Industrial IoT.
slug: /
---
......
---
title: Introduction
description: This document introduces the major features, competitive advantages, typical use cases, and benchmarks of TDengine.
toc_max_heading_level: 2
---
......
---
title: Concepts
description: This document describes the basic concepts of TDengine, including the supertable.
---
In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time-series use case. We assume the following: (1) each smart meter collects three metrics, i.e., current, voltage, and phase; (2) there are multiple smart meters; and (3) each meter has static attributes like location and group ID. Based on this, the collected data will look similar to the following table:
......
---
sidebar_label: Docker
title: Quick Install on Docker
description: This document describes how to install TDengine in a Docker container and perform queries and inserts.
---
This document describes how to install TDengine in a Docker container and perform queries and inserts.
......
---
sidebar_label: Package
title: Quick Install from Package
description: This document describes how to install TDengine on Linux, Windows, and macOS and perform queries and inserts.
---
import Tabs from "@theme/Tabs";
......
---
title: Get Started
description: This document describes how to install TDengine on various platforms.
---
import GitHubSVG from './github.svg'
......
---
sidebar_label: Connect
title: Connect to TDengine
description: This document describes how to establish connections to TDengine and how to install and use TDengine connectors.
---
import Tabs from "@theme/Tabs";
......
---
title: Data Model
description: This document describes the data model of TDengine.
---
The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the [STable](/concept/#super-table-stable) (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
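For a concrete picture of this flow, here is a minimal sketch using the smart-meters example from these docs and the `taospy` connector (the host and credentials are stock-install assumptions):

```py
import taos  # taospy, the TDengine Python connector

# Connect to a local TDengine instance with default credentials (assumptions).
conn = taos.connect(host="localhost", user="root", password="taosdata")
cursor = conn.cursor()

# One database for the use case, and one STable whose schema fits the
# smart-meters data: three collected metrics plus two static tags.
cursor.execute("CREATE DATABASE IF NOT EXISTS power")
cursor.execute(
    "CREATE STABLE IF NOT EXISTS power.meters "
    "(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) "
    "TAGS (location BINARY(64), groupid INT)"
)
conn.close()
```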
......
---
title: Insert Using SQL
description: This document describes how to insert data into TDengine using SQL.
---
import Tabs from "@theme/Tabs";
......
---
title: Write from Kafka
description: This document describes how to insert data into TDengine using Kafka.
---
import Tabs from "@theme/Tabs";
......
---
sidebar_label: InfluxDB Line Protocol
title: InfluxDB Line Protocol
description: This document describes how to insert data into TDengine using the InfluxDB Line Protocol.
---
import Tabs from "@theme/Tabs";
......@@ -38,7 +39,7 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0
- Each value in `field_set` must be self-descriptive of its data type. For example, 1.2f32 means the value 1.2 of float type. Without the `f` type suffix, it will be treated as type double.
- Multiple kinds of precision can be used for the `timestamp` field. Time precision can be from nanosecond (ns) to hour (h).
- The child table name is created automatically in a rule to guarantee its uniqueness. But you can configure `smlChildTableName` in taos.cfg to specify a tag value as the table name if the tag value is unique globally. For example, if a tag is called `tname` and you set `smlChildTableName=tname` in taos.cfg, when you insert `st,tname=cpu1,t1=4 c1=3 1626006833639000000`, the child table `cpu1` will be created automatically. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
- It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur. (smlDataFormat in taos.cfg defaults to false since version 3.0.1.3 and is discarded since 3.0.3.0.)
:::
For more details please refer to [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol)
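As an illustration of the rules above, a hedged sketch using the Python connector's schemaless interface (assuming a local instance, the `taospy` package, and an existing `test` database):

```py
import taos
from taos import SmlProtocol, SmlPrecision

conn = taos.connect(host="localhost", user="root", password="taosdata", database="test")

# One InfluxDB line: the f32 suffix marks float fields, the bare 221 is
# treated as double, and the trailing value is a nanosecond timestamp.
lines = [
    "meters,location=California.LosAngeles,groupid=2 "
    "current=11.8f32,voltage=221,phase=0.28f32 1648432611249000000"
]

# The supertable and the child table are created automatically on first write.
conn.schemaless_insert(lines, SmlProtocol.LINE_PROTOCOL, SmlPrecision.NANO_SECOND)
conn.close()
```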
......
---
sidebar_label: OpenTSDB Line Protocol
title: OpenTSDB Line Protocol
description: This document describes how to insert data into TDengine using the OpenTSDB Line Protocol.
---
import Tabs from "@theme/Tabs";
......
---
sidebar_label: OpenTSDB JSON Protocol
title: OpenTSDB JSON Protocol
description: This document describes how to insert data into TDengine using the OpenTSDB JSON protocol.
---
import Tabs from "@theme/Tabs";
......
---
sidebar_label: High Performance Writing
title: High Performance Writing
description: This document describes how to achieve high performance when writing data into TDengine.
---
import Tabs from "@theme/Tabs";
......
......@@ -53,8 +53,69 @@ for p in ps:
In addition to python's built-in multithreading and multiprocessing library, we can also use the third-party library gunicorn.
### Examples
<details>
<summary>kafka_example_perform</summary>
`kafka_example_perform` is the entry point of the examples.
```py
{{#include docs/examples/python/kafka_example_perform.py}}
```
</details>
<details>
<summary>kafka_example_common</summary>
`kafka_example_common` is the common code of the examples.
```py
{{#include docs/examples/python/kafka_example.py}}
{{#include docs/examples/python/kafka_example_common.py}}
```
</details>
<details>
<summary>kafka_example_producer</summary>
`kafka_example_producer` is the producer, which is responsible for generating test data and sending it to Kafka.
```py
{{#include docs/examples/python/kafka_example_producer.py}}
```
</details>
<details>
<summary>kafka_example_consumer</summary>
`kafka_example_consumer` is the consumer, which is responsible for consuming data from Kafka and writing it to TDengine.
```py
{{#include docs/examples/python/kafka_example_consumer.py}}
```
</details>
### Execute Python Examples
<details>
<summary>execute Python examples</summary>
1. Install and start up Kafka.
2. Install Python 3 and pip.
3. Install `taospy` with pip.
4. Install `kafka-python` with pip.
5. Run this example.
The entry point of this example is `kafka_example_perform.py`. For more information about usage, run it with the `--help` option.
```
python3 kafka_example_perform.py --help
```
For example, the following command creates 100 subtables and inserts 20,000 records into each table, with a Kafka max poll of 100, 1 thread, and 1 process per thread.
```
python3 kafka_example_perform.py -table-count=100 -table-items=20000 -max-poll=100 -threads=1 -processes=1
```
</details>
---
title: Insert Data
description: This document describes how to insert data into TDengine.
---
TDengine supports multiple protocols for inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row or in batches. Data from one or more collection points can be inserted simultaneously. Data can be inserted with multiple threads, and out-of-order data and historical data can be inserted as well. InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol are the three schemaless insert protocols supported by TDengine. It's not necessary to create STables and tables in advance when using schemaless protocols, and the schemas can be adjusted automatically based on the data being inserted.
......
---
title: Query Data
description: "This chapter introduces major query functionalities and how to perform sync and async query using connectors."
description: This document describes how to query data in TDengine and how to perform synchronous and asynchronous queries using connectors.
---
import Tabs from "@theme/Tabs";
......
---
title: Stream Processing
sidebar_label: Stream Processing
description: This document describes the stream processing component of TDengine.
---
Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. In a traditional time-series solution, this generally requires the deployment of stream processing systems such as Kafka or Flink. However, the complexity of such systems increases the cost of development and maintenance.
......
---
sidebar_label: Caching
title: Caching
description: This document describes the caching component of TDengine.
---
TDengine uses various kinds of caching techniques to efficiently write and query data. This document describes the caching component of TDengine.
......
---
sidebar_label: UDF
title: User-Defined Functions (UDF)
description: This document describes how to create user-defined functions (UDF), your own scalar and aggregate functions that can expand the query capabilities of TDengine.
---
The built-in functions of TDengine may not be sufficient for the use cases of every application. In this case, you can define custom functions for use in TDengine queries. These are known as user-defined functions (UDF). A user-defined function takes one column of data or the result of a subquery as its input.
......
---
title: Developer Guide
description: This document describes how to use the various components of TDengine from a developer's perspective.
---
Before creating an application to process time-series data with TDengine, consider the following:
......
---
sidebar_label: Manual Deployment
title: Manual Deployment and Management
description: This document describes how to deploy TDengine on a server.
---
## Prerequisites
......
---
sidebar_label: Kubernetes
title: Deploying a TDengine Cluster in Kubernetes
description: This document describes how to deploy TDengine on Kubernetes.
---
TDengine is a cloud-native time-series database that can be deployed on Kubernetes. This document gives a step-by-step description of how you can use YAML files to create a TDengine cluster and introduces common operations for TDengine in a Kubernetes environment.
......
---
sidebar_label: Helm
title: Use Helm to deploy TDengine
description: This document describes how to deploy TDengine on Kubernetes by using Helm.
---
Helm is a package manager for Kubernetes that can provide more capabilities in deploying on Kubernetes.
......
---
title: Deployment
description: This document describes how to deploy a TDengine cluster on a server, on Kubernetes, and by using Helm.
---
TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
......
---
sidebar_label: Data Types
title: Data Types
description: This document describes the data types that TDengine supports.
---
## Timestamp
......
---
sidebar_label: Database
title: Database
description: This document describes how to create and perform operations on databases.
---
## Create a Database
......@@ -58,7 +58,7 @@ database_option: {
- WAL_FSYNC_PERIOD: specifies the interval (in milliseconds) at which data is written from the WAL to disk. This parameter takes effect only when the WAL parameter is set to 2. The default value is 3000. Enter a value between 0 and 180000. The value 0 indicates that incoming data is immediately written to disk.
- MAXROWS: specifies the maximum number of rows recorded in a block. The default value is 4096.
- MINROWS: specifies the minimum number of rows recorded in a block. The default value is 100.
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default. The Enterprise Edition supports the [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) function, so multiple KEEP values (comma-separated, up to 3 values that satisfy keep0 <= keep1 <= keep2, for example KEEP 100h,100d,3650d) are supported. The Community Edition does not support Tiered Storage; even if multiple KEEP values are configured, they do not take effect, and only the maximum value is used as KEEP. (A usage sketch follows this list.)
- PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB.
- PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384.
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
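As mentioned in the KEEP item above, here is a hedged sketch of combining a few of these options when creating a database (the `taospy` package is assumed, and the option values are illustrative only):

```py
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
cursor = conn.cursor()

# DURATION/KEEP/PRECISION as described above. A single KEEP value works on
# every edition; the comma-separated form needs Enterprise tiered storage.
cursor.execute(
    "CREATE DATABASE IF NOT EXISTS sensor "
    "DURATION 10 KEEP 3650 PRECISION 'ms' WAL_FSYNC_PERIOD 3000"
)
conn.close()
```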
......
---
title: Table
description: This document describes how to create and perform operations on standard tables and subtables.
---
## Create Table
......
---
sidebar_label: Supertable
title: Supertable
description: This document describes how to create and perform operations on supertables.
---
## Create a Supertable
......
---
sidebar_label: Insert
title: Insert
description: This document describes how to insert data into TDengine.
---
## Syntax
......@@ -27,7 +28,7 @@ INSERT INTO tb_name [(field1_name, ...)] subquery
2. The precision of a timestamp depends on its format. The precision configured for the database affects only timestamps that are inserted as long integers (UNIX time). Timestamps inserted as date and time strings are not affected. As an example, the timestamp 2021-07-13 16:16:48 is equivalent to 1626164208 in UNIX time. This UNIX time is modified to 1626164208000 for databases with millisecond precision, 1626164208000000 for databases with microsecond precision, and 1626164208000000000 for databases with nanosecond precision.
3. If you want to insert multiple rows simultaneously, do not use the NOW function in the timestamp. Using the NOW function in this situation will cause multiple rows to have the same timestamp and prevent them from being stored correctly. This is because the NOW function obtains the current time on the client, and multiple instances of NOW in a single statement will return the same time.
The earliest timestamp that you can use when inserting data is equal to the current time on the server minus the value of the KEEP parameter (you can configure the KEEP parameter when you create a database; the default value is 3650 days). The latest timestamp you can use when inserting data depends on the PRECISION parameter (you can configure the PRECISION parameter when you create a database: ms means milliseconds, us means microseconds, ns means nanoseconds, and the default value is milliseconds). If the timestamp precision is milliseconds or microseconds, the latest timestamp is the Unix epoch (January 1st, 1970 at 00:00:00.000 UTC) plus 1000 years, that is, January 1st, 2970 at 00:00:00.000 UTC; if the timestamp precision is nanoseconds, the latest timestamp is the Unix epoch plus 292 years, that is, January 1st, 2262 at 00:00:00.000000000 UTC.
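To sidestep the NOW pitfall from rule 3, a client can compute explicit timestamps for a batch insert. A minimal sketch (assuming a millisecond-precision database and a child table `d1001` with the smart-meters schema):

```py
import time
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", database="power")
cursor = conn.cursor()

# One explicit millisecond timestamp per row; repeated NOW values in a
# single statement would all resolve to the same instant and collide.
base_ms = int(time.time() * 1000)
values = ", ".join(f"({base_ms + i}, 10.3, 219, 0.31)" for i in range(3))
cursor.execute(f"INSERT INTO d1001 VALUES {values}")
conn.close()
```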
**Syntax**
......
---
sidebar_label: Select
title: Select
description: This document describes how to query data in TDengine.
---
## Syntax
......@@ -12,6 +13,7 @@ SELECT [DISTINCT] select_list
from_clause
[WHERE condition]
[partition_by_clause]
[interp_clause]
[window_clause]
[group_by_clause]
[order_by_clasue]
......@@ -52,8 +54,11 @@ window_clause: {
| STATE_WINDOW(col)
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
interp_clause:
RANGE(ts_val, ts_val), EVERY(every_val), FILL(fill_mod_and_val)
partition_by_clause:
PARTITION BY expr [, expr] ...
group_by_clause:
GROUP BY expr [, expr] ... HAVING condition
......
---
sidebar_label: Delete Data
title: Delete Data
description: This document describes how to delete data from TDengine.
---
TDengine provides the functionality of deleting data from a table or STable according to a specified time range. It can be used to clean up abnormal data generated due to device failures.
......
---
sidebar_label: Functions
title: Functions
description: This document describes the standard SQL functions available in TDengine.
toc_max_heading_level: 4
---
......@@ -872,9 +873,9 @@ INTERP(expr)
- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists an interpolation value will be returned based on `FILL` parameter.
- The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input.
- `INTERP` must be used along with `RANGE`, `EVERY`, `FILL` keywords.
- The output time range of `INTERP` is specified by the `RANGE(timestamp1,timestamp2)` parameter, with timestamp1 <= timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified.
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY(time_unit)`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `time_unit` parameter. The parameter `time_unit` must be an integer, with no quotes, with a time unit of a (millisecond), s (second), m (minute), h (hour), d (day), or w (week). For example, `EVERY(500a)` will interpolate every 500 milliseconds.
- Interpolation is performed based on the `FILL` parameter. For more information about the FILL clause, see [FILL Clause](../distinguished/#fill-clause).
- `INTERP` can only be used to interpolate in a single timeline, so it must be used with `partition by tbname` when it's used on a STable.
- Pseudocolumn `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported after version 3.0.1.4).
- Pseudocolumn `_isfilled` can be used along with `INTERP` to indicate whether the results are original records or data points generated by the interpolation algorithm (supported after version 3.0.2.3). A usage sketch follows this list.
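Putting these rules together, a hedged sketch of an `INTERP` query on the smart-meters STable (the time range and fill mode are illustrative):

```py
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", database="power")
cursor = conn.cursor()

# RANGE bounds the output window, EVERY(5s) sets the interpolation step,
# FILL(LINEAR) chooses the algorithm, and PARTITION BY tbname keeps one
# timeline per child table; _irowts/_isfilled describe each output row.
cursor.execute(
    "SELECT tbname, _irowts, _isfilled, INTERP(current) FROM meters "
    "PARTITION BY tbname "
    "RANGE('2023-01-01 00:00:00', '2023-01-01 01:00:00') "
    "EVERY(5s) FILL(LINEAR)"
)
for row in cursor.fetchall():
    print(row)
conn.close()
```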
......
---
sidebar_label: Time-Series Extensions
title: Time-Series Extensions
description: This document describes the extended functions specific to time-series data processing available in TDengine.
---
As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL.
......
---
sidebar_label: Data Subscription
title: Data Subscription
description: This document describes the SQL statements related to the data subscription component of TDengine.
---
The information in this document is related to the TDengine data subscription feature.
......
---
sidebar_label: Stream Processing
title: Stream Processing
description: This document describes the SQL statements related to the stream processing component of TDengine.
---
Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. Stream processing components like Kafka, Flink, and Spark are often deployed alongside a time-series database to handle these operations, increasing system complexity and maintenance costs.
......
---
sidebar_label: Operators
title: Operators
description: This document describes the SQL operators available in TDengine.
---
## Arithmetic Operators
......
---
sidebar_label: JSON Type
title: JSON Type
description: This document describes the JSON data type in TDengine.
---
......
---
title: Escape Characters
description: This document describes the usage of escape characters in TDengine.
---
## Escape Characters
......
---
sidebar_label: Name and Size Limits
title: Name and Size Limits
description: This document describes the name and size limits in TDengine.
---
## Naming Rules
......
---
sidebar_label: Reserved Keywords
title: Reserved Keywords
description: This document describes the reserved keywords in TDengine that cannot be used in object names.
---
## Keyword List
......
---
sidebar_label: Cluster
title: Cluster
description: This document describes the SQL statements related to cluster management in TDengine.
---
The physical entities that form TDengine clusters are known as data nodes (dnodes). Each dnode is a process running on the operating system of the physical machine. Dnodes can contain virtual nodes (vnodes), which store time-series data. Virtual nodes are formed into vgroups, which have 1 or 3 vnodes depending on the replica setting. If you want to enable replication on your cluster, it must contain at least three nodes. Dnodes can also contain management nodes (mnodes). Each cluster has up to three mnodes. Finally, dnodes can contain query nodes (qnodes), which compute time-series data, thus separating compute from storage. A single dnode can contain a vnode, qnode, and mnode.
......
---
sidebar_label: Metadata
title: Information_Schema Database
description: This document describes how to use the INFORMATION_SCHEMA database in TDengine.
---
TDengine includes a built-in database named `INFORMATION_SCHEMA` to provide access to database metadata, system information, and status information. This information includes database names, table names, and currently running SQL statements. All information related to TDengine maintenance is stored in this database. It contains several read-only tables. These tables are more accurately described as views, and they do not correspond to specific files. You can query these tables but cannot write data to them. The INFORMATION_SCHEMA database is intended to provide a unified method for SHOW commands to access data. However, using SELECT ... FROM INFORMATION_SCHEMA.tablename offers several advantages over SHOW commands:
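For example, a short sketch of querying one of these views through the Python connector (the `ins_databases` view name follows the 3.0 documentation; adjust it to your version if needed):

```py
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
cursor = conn.cursor()

# Equivalent information to SHOW DATABASES, but with the full power of
# SELECT: projection, filtering, ordering, and so on.
cursor.execute("SELECT name, ntables FROM information_schema.ins_databases")
for name, ntables in cursor.fetchall():
    print(name, ntables)
conn.close()
```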
......
---
sidebar_label: Statistics
title: Performance_Schema Database
description: This document describes how to use the PERFORMANCE_SCHEMA database in TDengine.
---
TDengine includes a built-in database named `PERFORMANCE_SCHEMA` to provide access to database performance statistics. This document introduces the tables of PERFORMANCE_SCHEMA and their structure.
......
---
sidebar_label: SHOW Statement
title: SHOW Statement for Metadata
description: This document describes how to use the SHOW statement in TDengine.
---
`SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in database `INFORMATION_SCHEMA`.
......@@ -363,7 +364,7 @@ Shows information about all vgroups in the system or about the vgroups for a spe
## SHOW VNODES
```sql
SHOW VNODES {dnode_id | dnode_endpoint};
```
Shows information about all vnodes in the system or about the vnodes for a specified dnode.
---
sidebar_label: Access Control
title: User and Access Control
description: This document describes how to manage users and permissions in TDengine.
---
This document describes how to manage permissions in TDengine.
......
---
sidebar_label: User-Defined Functions
title: User-Defined Functions (UDF)
description: This document describes the SQL statements related to user-defined functions (UDF) in TDengine.
---
You can create user-defined functions and import them into TDengine.
......
---
sidebar_label: Indexing
title: Indexing
description: This document describes the SQL statements related to indexing in TDengine.
---
TDengine supports SMA and FULLTEXT indexing.
......
---
sidebar_label: Error Recovery
title: Error Recovery
description: This document describes the SQL statements related to error recovery in TDengine.
---
In a complex environment, connections and query tasks may encounter errors or fail to return in a reasonable time. If this occurs, you can terminate the connection or task.
......
---
sidebar_label: Changes in TDengine 3.0
title: Changes in TDengine 3.0
description: This document describes how TDengine SQL has changed in version 3.0 compared with previous versions.
---
## Basic SQL Elements
......
---
title: TDengine SQL
description: This document describes the syntax and functions supported by TDengine SQL.
---
This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL. TDengine SQL has been enhanced in version 3.0, and the query engine has been rearchitected. For information about how TDengine SQL has changed, see [Changes in TDengine 3.0](../taos-sql/changes).
......
---
title: Install and Uninstall
description: This document describes how to install, upgrade, and uninstall TDengine.
---
import Tabs from "@theme/Tabs";
......
---
sidebar_label: Resource Planning
title: Resource Planning
description: This document describes how to plan compute and storage resources for your TDengine cluster.
---
It is important to plan computing and storage resources if you use TDengine to build an IoT, time-series, or big data platform. This chapter describes how to plan the CPU, memory, and disk resources required.
......
---
title: Fault Tolerance and Disaster Recovery
description: This document describes how TDengine provides fault tolerance and disaster recovery.
---
## Fault Tolerance
......
---
title: Data Import
description: This document describes how to import data into TDengine.
---
TDengine provides multiple ways of importing data: import with a script, import from a data file, and import using `taosdump`.
......
---
title: Data Export
description: This document describes how to export data from TDengine.
---
There are two ways of exporting data from a TDengine cluster:
......
---
title: TDengine Monitoring
description: This document describes how to monitor your TDengine cluster.
---
After TDengine is started, it automatically writes monitoring data, including CPU, memory and disk usage, bandwidth, number of requests, disk I/O speed, and slow queries, into a designated database at a predefined interval through taosKeeper. Additionally, some important system operations (such as logon, create user, and drop database) and the alerts and warnings generated in TDengine are written into the `log` database too. A system operator can view the data in the `log` database from the TDengine CLI or from a web console.
......
---
title: Problem Diagnostics
description: This document describes how to diagnose issues with your TDengine cluster.
---
## Network Connection Diagnostics
......
---
title: Administration
description: This document describes how to perform management operations on your TDengine cluster from an administrator's perspective.
---
This chapter is mainly written for system administrators. It covers download, install/uninstall, data import/export, system monitoring, user management, connection management, capacity planning and system optimization.
......
---
title: REST API
description: This document describes the TDengine REST API.
---
To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles, namely the REST API. To minimize the learning cost, unlike the REST APIs for other database engines, TDengine allows insertion of SQL commands in the BODY of an HTTP POST request to operate the database.
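For instance, a minimal sketch with Python's `requests` (the port 6041 and the root/taosdata credentials are the defaults of a stock install and are assumptions here):

```py
import requests

# The SQL statement goes in the POST body; authentication is HTTP Basic.
resp = requests.post(
    "http://localhost:6041/rest/sql",
    data="SELECT server_version()",
    auth=("root", "taosdata"),
)
print(resp.json())  # e.g. {"code": 0, "column_meta": [...], "data": [...]}
```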
......
---
sidebar_label: C/C++
title: C/C++ Connector
description: This document describes the TDengine C/C++ connector.
---
C/C++ developers can use TDengine's client driver and the C/C++ connector, to develop their applications to connect to TDengine clusters for data writing, querying, and other functions. To use the C/C++ connector you must include the TDengine header file _taos.h_, which lists the function prototypes of the provided APIs. The application also needs to link to the corresponding dynamic libraries on the platform where it is located.
......
---
sidebar_label: Java
title: TDengine Java Connector
description: This document describes the TDengine Java Connector.
toc_max_heading_level: 4
---
import Tabs from '@theme/Tabs';
......
---
sidebar_label: Go
title: TDengine Go Connector
description: This document describes the TDengine Go connector.
toc_max_heading_level: 4
---
import Tabs from '@theme/Tabs';
......
---
sidebar_label: Rust
title: TDengine Rust Connector
description: This document describes the TDengine Rust connector.
toc_max_heading_level: 4
---
import Tabs from '@theme/Tabs';
......
---
sidebar_label: Python
title: TDengine Python Connector
description: This document describes taospy, the TDengine Python connector.
---
import Tabs from "@theme/Tabs";
......@@ -32,7 +32,7 @@ We recommend using the latest version of `taospy`, regardless of the version of
### Preparation
1. Install Python. The recent taospy package requires Python 3.6+. The earlier versions of taospy require Python 3.7+. The taos-ws-py package requires Python 3.7+. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
If you use a native connection, you will also need to [Install Client Driver](/reference/connector#Install-Client-Driver). The client install package includes the TDengine client dynamic link library (`libtaos.so` or `taos.dll`) and the TDengine CLI.
......@@ -78,6 +78,22 @@ pip3 install git+https://github.com/taosdata/taos-connector-python.git
</TabItem>
</Tabs>
#### Install `taos-ws-py` (Optional)
The `taos-ws-py` package provides access to TDengine via WebSocket.
##### Install taos-ws-py with taospy
```bash
pip3 install taospy[ws]
```
##### Install taos-ws-py only
```bash
pip3 install taos-ws-py
```
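Once installed, connecting over WebSocket might look like the sketch below (the DSN form follows the taos-ws-py examples and assumes taosAdapter listening on its default port 6041):

```py
import taosws

# WebSocket DSN: taosws://user:password@host:port (the port of taosAdapter).
conn = taosws.connect("taosws://root:taosdata@localhost:6041")
result = conn.query("SELECT server_version()")
for row in result:
    print(row)
```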
### Verify
<Tabs defaultValue="rest">
......
---
sidebar_label: Node.js
title: TDengine Node.js Connector
description: This document describes the TDengine Node.js connector.
toc_max_heading_level: 4
---
import Tabs from "@theme/Tabs";
......@@ -59,9 +60,19 @@ Please refer to [version support list](/reference/connector#version-support)
- `python` (`v2.7` recommended; `v3.x.x` currently not supported)
- `@tdengine/client` 3.0.0 supports Node.js LTS v10.9.0 or later and Node.js LTS v12.8.0 or later. Older versions may be incompatible.
- `make`
- C compiler, [GCC](https://gcc.gnu.org) v4.8.5 or later.
</TabItem>
<TabItem value="macOS" label="macOS installation dependencies">
- `python` (`v2.7` recommended; `v3.x.x` currently not supported)
- `@tdengine/client` 3.0.0 currently supports Node.js v12.22.12 and later 12.x versions only. Other versions may be incompatible.
- `make`
- C compiler, [GCC](https://gcc.gnu.org) v4.8.5 or later.
</TabItem>
<TabItem value="Windows" label="Windows system installation dependencies">
- Installation method 1
......@@ -249,4 +260,4 @@ let cursor = conn.cursor();
## API Reference
[API Reference](https://docs.taosdata.com/api/td2.0-connector/)
---
sidebar_label: C#
title: C# Connector
description: This document describes the TDengine C# connector.
toc_max_heading_level: 4
---
import Tabs from '@theme/Tabs';
......
---
sidebar_label: PHP
title: PHP Connector
description: This document describes the TDengine PHP connector.
---
`php-tdengine` is the TDengine PHP connector provided by the TDengine community. In particular, it supports Swoole coroutines.
......
---
title: Connector
description: This document describes the connectors that TDengine provides to interface with various programming languages.
---
TDengine provides a rich set of APIs (application development interface). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
......@@ -59,11 +60,11 @@ The different database framework specifications for various programming language
| -------------------------------------- | ------------- | --------------- | ------------- | ------------- | ------------- | ------------- |
| **Connection Management** | Support | Support | Support | Support | Support | Support |
| **Regular Query** | Support | Support | Support | Support | Support | Support |
| **Parameter Binding** | Not Supported | Not Supported | Support | Support | Not Supported | Support |
| **Subscription (TMQ)** | Not Supported | Support | Support | Not Supported | Not Supported | Support |
| **Schemaless** | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported |
| **Bulk Pulling (based on WebSocket)** | Support | Support | Support | Support | Support | Support |
| **DataFrame** | Not Supported | Support | Not Supported | Not Supported | Not Supported | Not Supported |
:::warning
......
---
title: "taosAdapter"
description: "taosAdapter is a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agent software such as Telegraf, StatsD, collectd, etc. It also provides an InfluxDB/OpenTSDB compatible data ingestion interface, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine."
sidebar_label: "taosAdapter"
title: taosAdapter
sidebar_label: taosAdapter
description: This document describes how to use taosAdapter, a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications.
---
import Prometheus from "./_prometheus.mdx"
......
---
title: taosBenchmark
sidebar_label: taosBenchmark
description: This document describes how to use taosBenchmark, a tool for testing the performance of TDengine.
toc_max_heading_level: 4
description: "taosBenchmark (once called taosdemo ) is a tool for testing the performance of TDengine."
---
# Introduction
......
---
title: taosdump
description: "taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed up data to the same, or another running TDengine cluster."
description: This document describes how to use taosdump, a tool for backing up and restoring the data in a TDengine cluster.
---
## Introduction
......
---
title: TDinsight - Grafana-based Zero-Dependency Monitoring Solution for TDengine
sidebar_label: TDinsight
description: This document describes TDinsight, a monitoring solution for TDengine.
---
TDinsight is a solution for monitoring TDengine using the built-in native monitoring database and [Grafana].
......@@ -325,11 +326,12 @@ Currently, only the number of logins per minute is reported.
Supports monitoring taosAdapter request statistics and status details, including:
1. **Http Request Total**: number of total requests.
2. **Http Request Fail**: number of failed requests.
3. **CPU Used**: CPU usage of taosAdapter.
4. **Memory Used**: Memory usage of taosAdapter.
5. **Http Request Inflight**: number of real-time requests.
6. **Http Status Code**: taosAdapter http status code.
## Upgrade
......
---
title: TDengine Command Line Interface (CLI)
sidebar_label: Command Line Interface
description: This document describes how to use the TDengine CLI.
---
The TDengine command-line interface (hereafter referred to as `TDengine CLI`) is the simplest way for users to manipulate and interact with TDengine instances.
......
---
title: List of supported platforms
description: "List of platforms supported by TDengine server, client, and connector"
description: This document describes the supported platforms for the TDengine server, client, and connectors.
---
## List of supported platforms for TDengine server
......
---
title: Deploying TDengine with Docker
description: "This chapter focuses on starting the TDengine service in a container and accessing it."
description: This chapter describes how to start and access TDengine in a Docker container.
---
This chapter describes how to start the TDengine service in a container and access it. Users can control the behavior of the service in the container by using environment variables on the docker run command-line or in the docker-compose file.
......
---
title: Configuration Parameters
description: "Configuration parameters for client and server in TDengine"
description: This document describes the configuration parameters for the TDengine server and client.
---
## Configuration File on Server Side
......@@ -162,11 +162,7 @@ The parameters described in this document by the effect that they have on the sy
| Meaning | Execution policy for query statements |
| Unit | None |
| Default | 1 |
| Value Range | 1: Run queries on vnodes and not on qnodes; 2: Run subtasks without scan operators on qnodes and subtasks with scan operators on vnodes; 3: Only run scan operators on vnodes, and run all other operators on qnodes. |
### querySmaOptimize
......@@ -176,11 +172,7 @@ The parameters described in this document by the effect that they have on the sy
| Meaning | SMA index optimization policy |
| Unit | None |
| Default Value | 0 |
| Notes |0: Disable SMA indexing and perform all queries on non-indexed data; 1: Enable SMA indexing and perform queries from suitable statements on precomputation results.|
### countAlwaysReturnValue
......@@ -323,6 +315,7 @@ The charset that takes effect is UTF-8.
| Applicable | Server Only |
| Meaning | All data files are stored in this directory |
| Default Value | /var/lib/taos |
| Note | The [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) function needs to be used in conjunction with the [KEEP](https://docs.tdengine.com/taos-sql/database/#parameters) parameter |
### tempDir
......@@ -603,7 +596,7 @@ The charset that takes effect is UTF-8.
| Attribute | Description |
| -------- | ----------------------------- |
| Applicable | Client only |
| Meaning | Whether schemaless columns are consistently ordered; deprecated and discarded since 3.0.3.0 |
| Value Range | 0: not consistent; 1: consistent. |
| Default | 1 |
......@@ -665,7 +658,7 @@ The charset that takes effect is UTF-8.
| 20 | minimalTmpDirGB | Yes | Yes | |
| 21 | smlChildTableName | Yes | Yes | |
| 22 | smlTagName | Yes | Yes | |
| 23 | smlDataFormat | No | Yes (discarded since 3.0.3.0) | |
| 24 | statusInterval | Yes | Yes | |
| 25 | logDir | Yes | Yes | |
| 26 | minimalLogDirGB | Yes | Yes | |
......
---
title: File directory structure
description: "TDengine installation directory description"
description: This document describes the structure of the TDengine directory after installation.
---
After TDengine is installed, the following directories or files will be created in the system by default.
......
---
title: Schemaless Writing
description: This document describes how to use the schemaless write component of TDengine.
---
In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. Schemaless writing automatically creates storage structures for your data as it is being written to TDengine, so that you do not need to create supertables in advance. When necessary, schemaless writing
......@@ -80,7 +80,7 @@ You can configure smlChildTableName in taos.cfg to specify table names, for exam
NULL.
6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data.
7. Errors encountered throughout the processing will interrupt the writing process and return an error code.
8. It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur. (smlDataFormat in taos.cfg defaults to false since version 3.0.1.3 and is discarded since 3.0.3.0.)
:::tip
All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed
......
---
sidebar_label: taosKeeper
title: taosKeeper
description: This document describes how to use taosKeeper, a tool for exporting TDengine monitoring metrics.
---
## Introduction
......
---
title: Reference
description: This document describes TDengine connectors and utilities.
---
This section describes the TDengine connectors and utilities.
......
---
sidebar_label: Grafana
title: Grafana
description: This document describes how to integrate TDengine with Grafana.
---
import Tabs from "@theme/Tabs";
......@@ -155,13 +156,13 @@ You can setup a zero-configuration stack for TDengine + Grafana by [docker-compo
services:
tdengine:
image: tdengine/tdengine:3.0.2.4
environment:
TAOS_FQDN: tdengine
volumes:
- tdengine-data:/var/lib/taos/
grafana:
image: grafana/grafana:9.3.6
volumes:
- ./tdengine.yml/:/etc/grafana/provisioning/tdengine.yml
- grafana-data:/var/lib/grafana
......@@ -196,11 +197,18 @@ As shown above, select the `TDengine` data source in the `Query` and enter the c
- INPUT SQL: Enter the desired query (the results being two columns and multiple rows), such as `select _wstart, avg(mem_system) from log.dnodes_info where ts >= $from and ts < $to interval($interval)`. In this statement, $from, $to, and $interval are variables that Grafana replaces with the query time range and interval. In addition to the built-in variables, custom template variables are also supported.
- ALIAS BY: This allows you to set the current query alias.
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final executed statement.
- Group by column name(s): `group by` or `partition by` column names, separated by commas. By setting `Group by column name(s)`, the panel can show multi-dimensional data when the SQL uses `group by` or `partition by`. For example, it can show data per `dnode_ep` if the SQL is `select _wstart as ts, avg(mem_system), dnode_ep from log.dnodes_info where ts>=$from and ts<=$to partition by dnode_ep interval($interval)` and `Group by column name(s)` is `dnode_ep`.
- Format to: the legend format for `group by` or `partition by`. For example, with the SQL above, `Group by column name(s)` set to `dnode_ep`, and `Format to` set to `mem_system_{{dnode_ep}}`, the series are labeled per `dnode_ep`.
Follow the default prompt to query the average system memory usage for the specified interval on the server where the current TDengine deployment is located as follows.
![TDengine Database TDinsight plugin create dashboard 2](./grafana/create_dashboard2.webp)
The following example queries the average system memory usage for the specified interval on each server.
![TDengine Database TDinsight plugin create dashboard 2](./grafana/create_dashboard3.webp)
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
### Importing the Dashboard
......
---
sidebar_label: Prometheus
title: Prometheus writing and reading
description: This document describes how to integrate TDengine with Prometheus.
---
import Prometheus from "../14-reference/_prometheus.mdx"
......
---
sidebar_label: Telegraf
title: Telegraf writing
description: This document describes how to integrate TDengine with Telegraf.
---
import Telegraf from "../14-reference/_telegraf.mdx"
......
---
sidebar_label: collectd
title: collectd writing
description: This document describes how to integrate TDengine with collectd.
---
import CollectD from "../14-reference/_collectd.mdx"
......
---
sidebar_label: StatsD
title: StatsD Writing
description: This document describes how to integrate TDengine with StatsD.
---
import StatsD from "../14-reference/_statsd.mdx"
......
---
sidebar_label: icinga2
title: icinga2 writing
description: This document describes how to integrate TDengine with icinga2.
---
import Icinga2 from "../14-reference/_icinga2.mdx"
......
---
sidebar_label: TCollector
title: TCollector writing
description: This document describes how to integrate TDengine with TCollector.
---
import TCollector from "../14-reference/_tcollector.mdx"
......
---
sidebar_label: EMQX Broker
title: EMQX Broker writing
description: This document describes how to integrate TDengine with the EMQX broker.
---
MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is an open-source MQTT broker. You can write MQTT data directly to TDengine without any code: you only need to set up "rules" in the EMQX Dashboard to create a simple configuration. EMQX supports saving data to TDengine by sending data to a web service, and the Enterprise Edition provides a native TDengine driver for direct saving. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it.
......
---
sidebar_label: HiveMQ Broker
title: HiveMQ Broker Writing
description: This document describes how to integrate TDengine with the HiveMQ broker.
---
[HiveMQ](https://www.hivemq.com/) is an MQTT broker that provides community and enterprise editions. HiveMQ is mainly used for enterprise machine-to-machine (M2M) communication and internal transport, meeting requirements for scalability, ease of management, and security. HiveMQ provides an open-source plug-in development kit. MQTT data can be saved to TDengine via the TDengine extension for HiveMQ. For more information, see [HiveMQ TDengine Extension](https://github.com/huskar-t/hivemq-tdengine-extension/blob/b62a26ecc164a310104df57691691b237e091c89/README_EN.md).
---
sidebar_label: Kafka
title: TDengine Kafka Connector Tutorial
description: This document describes how to integrate TDengine with Kafka.
---
TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDengine Sink Connector. Users only need to provide a simple configuration file to synchronize the data of the specified topic in Kafka (batch or real-time) to TDengine or synchronize the data (batch or real-time) of the specified database in TDengine to Kafka.
......
---
sidebar_label: Google Data Studio
title: Use Google Data Studio to access TDengine
description: This document describes how to integrate TDengine with Google Data Studio.
---
Data Studio is a powerful tool for reporting and visualization, offering a wide variety of charts and connectors and making it easy to generate reports based on predefined templates. Its ease of use and robust ecosystem have made it one of the first choices for people working in data analysis.
......
---
sidebar_label: JupyterLab
title: Connect JupyterLab to TDengine
description: This document describes how to integrate TDengine with JupyterLab.
---
JupyterLab is the next generation of the ubiquitous Jupyter Notebook. In this note we show you how to install the TDengine Python connector to connect to TDengine in JupyterLab. You can then insert data and perform queries against the TDengine instance within JupyterLab.
......