diff --git a/docs/en/07-tools/02-taosdump.md b/docs/en/07-tools/02-taosdump.md
deleted file mode 100644
index bf7a3db40f86349348ffee7746bec69686cdf97e..0000000000000000000000000000000000000000
--- a/docs/en/07-tools/02-taosdump.md
+++ /dev/null
@@ -1 +0,0 @@
-# taosDump
\ No newline at end of file
diff --git a/docs/en/07-tools/04-taosx.md b/docs/en/07-tools/02-taosx.md
similarity index 100%
rename from docs/en/07-tools/04-taosx.md
rename to docs/en/07-tools/02-taosx.md
diff --git a/docs/en/07-tools/03-taosbenchmark.md b/docs/en/07-tools/03-taosbenchmark.md
index f752b891f5032754d9a92200f9a2ecba6515946b..2ad48cd3e5ec2cc134dbb7f94dd5ec442d73cba7 100644
--- a/docs/en/07-tools/03-taosbenchmark.md
+++ b/docs/en/07-tools/03-taosbenchmark.md
@@ -1 +1,427 @@
-# taosBenchMark
\ No newline at end of file
+# taosBenchmark
+
+taosBenchmark (formerly taosdemo) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and can simulate large amounts of data generated by many devices. taosBenchmark can be configured to generate user-defined databases, supertables, and subtables, and to populate these with time-series data for performance benchmarking. taosBenchmark is highly configurable; among other things, you can configure the time interval for inserting data, the number of working threads, and whether to insert disordered data. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users.
+
+## Installation
+
+There are two ways to install taosBenchmark:
+
+- Installing the official TDengine installer automatically installs taosBenchmark. Please refer to [TDengine installation](/operation/pkg-install) for details.
+
+- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
+
+## Run
+
+### Configuration and running methods
+
+taosBenchmark needs to be executed from a terminal of the operating system. It supports two configuration methods: [command-line arguments](#command-line-arguments-in-detail) and a [JSON configuration file](#configuration-file-parameters-in-detail). These two methods are mutually exclusive. Users can use `-f <json file>` to specify a configuration file. When running taosBenchmark with command-line arguments to control its behavior, the `-f` parameter cannot be used. In addition, taosBenchmark offers a special way of running without any parameters.
+
+taosBenchmark supports complete performance testing of TDengine by providing write, query, and subscribe functionality. These three functions are mutually exclusive, and users can select only one of them each time taosBenchmark runs. The query and subscribe functions are only configurable through a JSON configuration file by specifying the `filetype` parameter, while writes can be configured through both the command line and a configuration file.
+
+**Make sure that the TDengine cluster is running correctly before running taosBenchmark.**
+
+### Run without command-line arguments
+
+Execute the following command to quickly experience taosBenchmark's default configuration-based write performance testing of TDengine.
+
+```bash
+taosBenchmark
+```
+
+When run without parameters, taosBenchmark connects to the TDengine cluster specified in `/etc/taos` by default and creates a database named `test` in TDengine, a super table named `meters` under the test database, and 10,000 tables under the super table, with 10,000 records written to each table. Note that if there is already a test database, this command will delete it first and create a new test database.
+
+### Run with command-line configuration parameters
+
+The `-f <json file>` argument cannot be used when running taosBenchmark with command-line parameters; users must specify all configuration parameters on the command line. The following is an example of testing taosBenchmark write performance using the command-line approach.
+
+```bash
+taosBenchmark -I stmt -n 200 -t 100
+```
+
+In the above command, `taosBenchmark` will create the default database named `test`, create the default super table named `meters`, create 100 subtables in the super table, and insert 200 records into each subtable using parameter binding.
+
+### Run with the configuration file
+
+A sample configuration file is provided in the taosBenchmark installation package under `/examples/taosbenchmark-json`.
+
+Use the following command line to run taosBenchmark and control its behavior via a configuration file.
+
+```bash
+taosBenchmark -f <json file>
+```
+
+**Here are a few examples of configuration files:**
+
+#### Example JSON configuration file for an insert scenario
+
+<details>
+<summary>insert.json</summary>
+
+```json
+{{#include /taos-tools/example/insert.json}}
+```
+
+</details>
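+
+For reference, the following is a minimal sketch of an insert configuration. It uses only parameter names and the `dbinfo`/`super_tables` nesting documented later on this page, and all values are illustrative rather than recommended:
+
+```json
+{
+  "filetype": "insert",
+  "cfgdir": "/etc/taos",
+  "host": "localhost",
+  "port": 6030,
+  "user": "root",
+  "password": "taosdata",
+  "thread_count": 8,
+  "databases": [{
+    "dbinfo": { "name": "test", "drop": "yes", "precision": "ms" },
+    "super_tables": [{
+      "name": "meters",
+      "child_table_count": 100,
+      "child_table_prefix": "d",
+      "insert_mode": "taosc",
+      "insert_rows": 200,
+      "timestamp_step": 1,
+      "start_timestamp": "2022-04-01 00:00:00.000",
+      "columns": [{ "type": "FLOAT" }, { "type": "INT" }, { "type": "FLOAT" }],
+      "tags": [{ "type": "INT" }, { "type": "BINARY", "len": 16 }]
+    }]
+  }]
+}
+```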
+
+#### Example JSON configuration file for a query scenario
+
+<details>
+<summary>query.json</summary>
+
+```json
+{{#include /taos-tools/example/query.json}}
+```
+
+</details>
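+
+A minimal sketch of a query configuration, again using only parameter names documented below; the SQL statement and result path are illustrative:
+
+```json
+{
+  "filetype": "query",
+  "cfgdir": "/etc/taos",
+  "host": "localhost",
+  "port": 6030,
+  "user": "root",
+  "password": "taosdata",
+  "specified_table_query": {
+    "query_interval": 1,
+    "threads": 3,
+    "sqls": [
+      { "sql": "select count(*) from test.meters", "result": "./query_res.txt" }
+    ]
+  }
+}
+```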
+
+#### Example JSON configuration file for a subscription scenario
+
+<details>
+<summary>subscribe.json</summary>
+
+```json
+{{#include /taos-tools/example/subscribe.json}}
+```
+
+</details>
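+
+A minimal sketch of a subscription configuration, assuming the same `specified_table_query` nesting that the subscription parameter reference below describes; all values are illustrative:
+
+```json
+{
+  "filetype": "subscribe",
+  "cfgdir": "/etc/taos",
+  "host": "localhost",
+  "port": 6030,
+  "user": "root",
+  "password": "taosdata",
+  "specified_table_query": {
+    "threads": 1,
+    "interval": 0,
+    "restart": "yes",
+    "keepProgress": "yes",
+    "sqls": [
+      { "sql": "select * from test.meters", "result": "./subscribe_res.txt" }
+    ]
+  }
+}
+```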
+
+## Command-line arguments in detail
+
+- **-f/--file <json file>** :
+  specify the configuration file to use. This file includes all parameters. Users should not use this parameter with other parameters on the command line. There is no default value.
+
+- **-c/--config-dir <config dir>** :
+  specify the directory of the TDengine cluster configuration file. The default path is `/etc/taos`.
+
+- **-h/--host <host>** :
+  specify the FQDN of the TDengine server to connect to. The default value is localhost.
+
+- **-P/--port <port>** :
+  specify the port number of the TDengine server to connect to, the default value is 6030.
+
+- **-I/--interface <insertMode>** :
+  specify the insert mode. Options are taosc, rest, stmt, sml, and sml-rest, corresponding to normal writes, RESTful interface writes, parameter-binding interface writes, schemaless interface writes, and RESTful schemaless interface writes (provided by taosAdapter). The default value is taosc.
+
+- **-u/--user <user>** :
+  specify the user name to connect to the TDengine server, the default is root.
+
+- **-p/--password <passwd>** :
+  specify the password to connect to the TDengine server, the default is `taosdata`.
+
+- **-o/--output <file>** :
+  specify the path of the result output file, the default value is `./output.txt`.
+
+- **-T/--thread <threadNum>** :
+  specify the number of threads to insert data, the default value is 8.
+
+- **-B/--interlace-rows <rowNum>** :
+  enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table completely before the next sub-table is inserted.
+
+- **-i/--insert-interval <timeInterval>** :
+  specify the insert interval in `ms` for interleaved insert mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. It means that after inserting interlaced rows for each child table, the data insertion with multiple threads will wait for the interval specified by this value before proceeding to the next round of writes.
+
+- **-r/--rec-per-req <rowNum>** :
+  specify the number of rows to write per request, the default value is 30000.
+
+- **-t/--tables <tableNum>** :
+  specify the number of subtables to create, the default value is 10000.
+
+- **-S/--timestampstep <stepLength>** :
+  specify the timestamp step in ms between records when inserting data into each child table, the default value is 1.
+
+- **-n/--records <recordNum>** :
+  specify the number of records inserted into each sub-table, the default value is 10000.
+
+- **-d/--database <dbName>** :
+  specify the name of the database used, the default value is `test`.
+
+- **-b/--data-type <colType>** :
+  specify the data column types of the super table. The default values are three columns of type FLOAT, INT, and FLOAT.
+
+- **-l/--columns <colNum>** :
+  specify the number of columns in the super table. If both this parameter and `-b/--data-type` are set, the resulting number of columns is the greater of the two. If the number specified by this parameter is greater than the number of columns specified by `-b/--data-type`, the unspecified column types default to INT, for example: `-l 5 -b float,double` results in the column types `FLOAT,DOUBLE,INT,INT,INT`. If the number of columns specified is less than or equal to the number of columns specified by `-b/--data-type`, then the columns specified by `-b/--data-type` will be used.
+  e.g.: `-l 3 -b float,double,float,bigint` will result in the column types `FLOAT,DOUBLE,FLOAT,BIGINT`.
+
+- **-A/--tag-type <tagType>** :
+  specify the tag column types of the super table. nchar and binary types can both set the length, for example:
+
+```
+taosBenchmark -A INT,DOUBLE,NCHAR,BINARY(16)
+```
+
+If the user does not set the tag type, the default is two tags, whose types are INT and BINARY(16).
+Note: in some shells, such as bash, "()" needs to be escaped, so the above command should be
+
+```
+taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
+```
+
+- **-w/--binwidth <length>** :
+  specify the default length for nchar and binary types, the default value is 64.
+
+- **-m/--table-prefix <tablePrefix>** :
+  specify the prefix of the sub-table names, the default value is "d".
+
+- **-E/--escape-character** :
+  specify whether to use escape characters in the super table and sub-table names, the default is no.
+
+- **-C/--chinese** :
+  specify whether to use Unicode Chinese characters in nchar and binary, the default is no.
+
+- **-N/--normal-table** :
+  specify whether taosBenchmark will create only normal tables instead of super tables. The default value is false. It can be used if the insert mode is taosc, stmt, or rest.
+
+- **-M/--random** :
+  specify whether taosBenchmark will generate random values. The default is false. When true, for tag/data columns of numeric types, the value is a random value within the range of values of that type. For NCHAR and BINARY type tag/data columns, the value is a random string within the specified length range.
+
+- **-x/--aggr-func** :
+  specify whether to query aggregation functions after insertion. The default value is false.
+
+- **-y/--answer-yes** :
+  specify whether to require the user to confirm at the prompt to continue. The default value is false.
+
+- **-O/--disorder <ratio>** :
+  specify the percentage probability of disordered data, with a value range of [0,50]. The default value is 0, i.e., there is no disordered data.
+
+- **-R/--disorder-range <range>** :
+  specify the timestamp fallback range for disordered data. Disordered timestamps are generated by subtracting a random value in this range from the otherwise ordered timestamp. Valid only if the percentage of disordered data specified by `-O/--disorder` is greater than 0.
+
+- **-F/--prepare_rand <num>** :
+  specify the number of unique values in the generated random data. A value of 1 means that all data are equal. The default value is 10000.
+
+- **-a/--replica <replicaNum>** :
+  specify the number of replicas when creating the database. The default value is 1.
+
+- **-V/--version** :
+  show version information only. Users should not use this with other parameters.
+
+- **-?/--help** :
+  show help information and exit. Users should not use it with other parameters.
+
+## Configuration file parameters in detail
+
+### General configuration parameters
+
+The parameters listed in this section apply to all function modes.
+
+- **filetype** : the function to be tested, with optional values `insert`, `query`, and `subscribe`. These correspond to the insert, query, and subscribe functions, respectively. Users can specify only one of these in each configuration file.
+
+- **cfgdir** : specify the directory of the TDengine cluster configuration file. The default path is `/etc/taos`.
+
+- **host** : specify the FQDN of the TDengine server to connect to. The default value is `localhost`.
+
+- **port** : specify the port number of the TDengine server to connect to, the default value is `6030`.
+
+- **user** : specify the user name to connect to the TDengine server, the default is `root`.
+
+- **password** : specify the password to connect to the TDengine server, the default value is `taosdata`.
+
+### Insert scenario configuration parameters
+
+`filetype` must be set to `insert` in the insert scenario. See [General configuration parameters](#general-configuration-parameters) for details.
+
+#### Database related configuration parameters
+
+The parameters related to database creation are configured in `dbinfo` in the JSON configuration file, as follows. These parameters correspond to the parameters specified in TDengine's `create database` statement.
+
+- **name** : specify the name of the database.
+
+- **drop** : indicate whether to delete the database before inserting. The default is true.
+
+- **replica** : specify the number of replicas when creating the database.
+
+- **days** : specify the time span for storing data in a single data file. The default is 10.
+
+- **cache** : specify the size of the cache blocks in MB. The default value is 16.
+
+- **blocks** : specify the number of cache blocks in each vnode. The default is 6.
+
+- **precision** : specify the database time precision. The default value is "ms".
+
+- **keep** : specify the number of days to keep the data. The default value is 3650.
+
+- **minRows** : specify the minimum number of records in the file block. The default value is 100.
+
+- **maxRows** : specify the maximum number of records in the file block. The default value is 4096.
+
+- **comp** : specify the file compression level. The default value is 2.
+
+- **walLevel** : specify the WAL level, default is 1.
+
+- **cacheLast** : indicate whether to allow the last record of each table to be kept in memory. The default value is 0. The value can be 0, 1, 2, or 3.
+
+- **quorum** : specify the number of write acknowledgments in multi-replica mode. The default value is 1.
+
+- **fsync** : specify the interval of fsync in ms when users set WAL to 2. The default value is 3000.
+
+- **update** : indicate whether to support data updates, default value is 0, values can be 0, 1, or 2.
+
+#### Super table related configuration parameters
+
+The parameters for creating super tables are configured in `super_tables` in the JSON configuration file, as shown below.
+
+- **name** : super table name, mandatory, no default value.
+
+- **child_table_exists** : whether the child tables already exist, default value is "no", values can be "yes" or "no".
+
+- **child_table_count** : the number of child tables, the default value is 10.
+
+- **child_table_prefix** : the prefix of the child table names, mandatory configuration item, no default value.
+
+- **escape_character** : specify whether the super table and child table names contain escape characters. The default is "no". The value can be "yes" or "no".
+
+- **auto_create_table** : only takes effect when insert_mode is taosc, rest, or stmt and child_table_exists is "no". "yes" means taosBenchmark will automatically create non-existent tables when inserting data; "no" means that taosBenchmark will create all tables before inserting.
+
+- **batch_create_tbl_num** : the number of tables per batch when creating sub-tables, default is 10. Note: the actual number of batches may not be the same as this value. If the executed SQL statement is larger than the maximum length supported, it will be automatically truncated and re-executed to continue creating.
+
+- **data_source** : specify the source of data generation. The default is data randomly generated by taosBenchmark.
+  The value can be "rand" or "sample". When "sample" is used, taosBenchmark will use the data in the file specified by the `sample_file` parameter.
+
+- **insert_mode** : the insertion mode, with options taosc, rest, stmt, sml, and sml-rest, corresponding to normal writes, RESTful interface writes, parameter-binding interface writes, schemaless interface writes, and RESTful schemaless interface writes (provided by taosAdapter). The default value is taosc.
+
+- **non_stop_mode** : specify whether to keep writing. If "yes", insert_rows will be disabled, and writing will not stop until Ctrl + C stops the program. The default value is "no", i.e., taosBenchmark will stop writing after the specified number of rows is written. Note: even though insert_rows is disabled in continuous write mode, it must still be configured as a non-zero positive integer.
+
+- **line_protocol** : insert data using the given line protocol. Only works when insert_mode is sml or sml-rest. The value can be `line`, `telnet`, or `json`.
+
+- **tcp_transfer** : the communication protocol in telnet mode. It only takes effect when insert_mode is sml-rest and line_protocol is telnet. If not configured, the default protocol is http.
+
+- **insert_rows** : the number of inserted rows per child table, default is 0.
+
+- **childtable_offset** : effective only if child_table_exists is yes; specifies the offset when fetching the list of child tables from the super table, i.e., which child table to start from.
+
+- **childtable_limit** : effective only when child_table_exists is yes; specifies the upper limit when fetching the list of child tables from the super table.
+
+- **interlace_rows** : enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table completely before the next sub-table is inserted.
+
+- **insert_interval** : specifies the insertion interval in ms for interleaved insertion mode. The default value is 0. It only works if interlace_rows is greater than 0. After inserting interlaced rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
+
+- **partial_col_num** : if this value is a positive number n, only the first n columns are written to; if n is 0, all columns are written. It only takes effect when insert_mode is taosc or rest.
+
+- **disorder_ratio** : specifies the percentage probability of disordered (i.e., out-of-order) data, in the value range [0,50]. The default is 0, which means there is no disordered data.
+
+- **disorder_range** : specifies the timestamp fallback range for disordered data. Disordered timestamps are generated by subtracting a random value in this range from the timestamp that would be used in the non-disordered case. Valid only if disorder_ratio is greater than 0.
+
+- **timestamp_step** : the timestamp step for inserting data into each child table, in units consistent with the `precision` of the database. For example, if the `precision` is milliseconds, the timestamp step will be in milliseconds. The default value is 1.
+
+- **start_timestamp** : the start timestamp of each sub-table, the default value is now.
+
+- **sample_format** : the type of the sample data file; for now only "csv" is supported.
+
+- **sample_file** : specify a CSV format file as the data source. It only works when data_source is "sample". If the number of rows in the CSV file is less than or equal to prepare_rand, taosBenchmark will read the CSV file data cyclically until it reaches prepare_rand rows; otherwise, taosBenchmark will read only prepare_rand rows. The final number of rows of data generated is the smaller of the two.
+
+- **use_sample_ts** : effective only when data_source is "sample"; indicates whether the CSV file specified by sample_file contains a timestamp as its first column. The default is no. If set to yes, the first column of the CSV file is used as the `timestamp`. Since timestamps within the same sub-table cannot repeat, the amount of generated data depends on the number of rows in the CSV file, and insert_rows will be ignored.
+
+- **tags_file** : only works when insert_mode is taosc or rest. The final tag values are related to child_table_count. If the tag data rows in the CSV file are fewer than the given number of child tables, taosBenchmark will read the CSV file data cyclically until the number of child tables specified by child_table_count is generated. Otherwise, taosBenchmark will read only child_table_count rows of tag data. The final number of child tables generated is the smaller of the two.
+
+#### Tag and Data Column Configuration Parameters
+
+The configuration parameters for specifying super table tag columns and data columns are in `columns` and `tags` in `super_tables`, respectively.
+
+- **type** : specify the column type. For optional values, please refer to the data types supported by TDengine.
+  Note: the JSON data type is special and can only be used for tags. When JSON is used as a tag type, it must be the only tag. In that case, `count` and `len` represent the number of key-value pairs within the JSON tag and the length of the value of each key-value pair, respectively. Values are strings by default.
+
+- **len** : specifies the length of this data type, valid for NCHAR, BINARY, and JSON data types. If this parameter is configured for other data types, a value of 0 means that the column is always written with a null value; if it is not 0, it is ignored.
+
+- **count** : specifies the number of consecutive occurrences of the column type, e.g., "count": 4096 generates 4096 columns of the specified type.
+
+- **name** : the name of the column. If used together with count, e.g., "name": "current", "count": 3, then the names of the 3 columns are current, current_2, and current_3.
+
+- **min** : the minimum value of a column/tag of this data type.
+
+- **max** : the maximum value of a column/tag of this data type.
+
+- **values** : the pool of values for an nchar/binary column/tag; the written value will be chosen randomly from this list.
+
+#### Insertion behavior configuration parameters
+
+- **thread_count** : specify the number of threads to insert data. Default is 8.
+
+- **create_table_thread_count** : the number of threads for creating tables, default is 8.
+
+- **connection_pool_size** : the number of pre-established connections to the TDengine server. If not configured, it is the same as the specified number of threads.
+
+- **result_file** : the path to the result output file, the default value is ./output.txt.
+
+- **confirm_parameter_prompt** : a switch parameter that requires the user to confirm at a prompt before continuing. The default value is false.
+
+- **interlace_rows** : enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, which means that data will be inserted into the following child table only after data has been inserted into one child table.
+  This parameter can also be configured in `super_tables`, and if so, the configuration in `super_tables` takes precedence and overrides the global setting.
+
+- **insert_interval** :
+  specifies the insertion interval in ms for interleaved insertion mode. The default value is 0. It only works if interlace_rows is greater than 0. It means that after inserting interlaced rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
+  This parameter can also be configured in `super_tables`, and if so, the configuration in `super_tables` takes precedence and overrides the global setting.
+
+- **num_of_records_per_req** :
+  the number of rows of data to be written per request to TDengine, the default value is 30000. If it is set too large, the TDengine client driver will return the corresponding error message, and you will need to lower this parameter to meet the writing requirements.
+
+- **prepare_rand** : the number of unique values in the generated random data. A value of 1 means that all data are the same. The default value is 10000.
+
+### Query scenario configuration parameters
+
+`filetype` must be set to `query` in the query scenario. See [General configuration parameters](#general-configuration-parameters) for details of this parameter and other general parameters.
+
+#### Configuration parameters for executing the specified query statement
+
+The configuration parameters for querying sub-tables or normal tables are set in `specified_table_query`.
+
+- **query_interval** : the query interval in seconds, the default value is 0.
+
+- **threads** : the number of threads executing the query SQL, the default value is 1.
+
+- **sqls** :
+  - **sql** : the SQL command to be executed.
+  - **result** : the file to save the query result. If it is unspecified, taosBenchmark will not save the result.
+
+#### Configuration parameters for querying super tables
+
+The configuration parameters of the super table query are set in `super_table_query`.
+
+- **stblname** : specify the name of the super table to be queried, required.
+
+- **query_interval** : the query interval in seconds, the default value is 0.
+
+- **threads** : the number of threads executing the query SQL, the default value is 1.
+
+- **sqls** :
+  - **sql** : the SQL command to be executed. For a super table query, keep "xxxx" in the SQL command; the program will automatically replace it with each sub-table name of the super table.
+  - **result** : the file to save the query result. If not specified, taosBenchmark will not save the result.
+
+### Subscription scenario configuration parameters
+
+`filetype` must be set to `subscribe` in the subscription scenario.
+See [General configuration parameters](#general-configuration-parameters) for details of this and other general parameters.
+
+#### Configuration parameters for executing the specified subscription statement
+
+The configuration parameters for subscribing to sub-tables or normal tables are set in `specified_table_query`.
+
+- **threads** : the number of threads executing the SQL, default is 1.
+
+- **interval** : the time interval for executing the subscription, in seconds, default is 0.
+
+- **restart** : "yes" means start a new subscription, "no" means continue the previous subscription, the default value is "no".
+
+- **keepProgress** : "yes" means keep the progress of the subscription, "no" means don't keep it, the default value is "no".
+
+- **resubAfterConsume** : "yes" means cancel the previous subscription and then subscribe again, "no" means continue the previous subscription, the default value is "no".
+
+- **sqls** :
+  - **sql** : the SQL command to be executed, required.
+  - **result** : the file to save the query result. If not specified, the result is not saved.
+
+#### Configuration parameters for subscribing to super tables
+
+The configuration parameters for subscribing to a super table are set in `super_table_query`.
+
+- **stblname** : the name of the super table to subscribe to.
+
+- **threads** : the number of threads executing the SQL, default is 1.
+
+- **interval** : the time interval for executing the subscription, in seconds, default is 0.
+
+- **restart** : "yes" means start a new subscription, "no" means continue the previous subscription, the default value is "no".
+
+- **keepProgress** : "yes" means keep the progress of the subscription, "no" means don't keep it, the default value is "no".
+
+- **resubAfterConsume** : "yes" means cancel the previous subscription and then subscribe again, "no" means continue the previous subscription, the default value is "no".
+
+- **sqls** :
+  - **sql** : the SQL command to be executed, required. For a super table subscription, keep "xxxx" in the SQL command; the program will automatically replace it with each sub-table name of the super table.
+  - **result** : the file to save the query result. If not specified, it will not be saved.
diff --git a/docs/en/07-tools/04-taosdump.md b/docs/en/07-tools/04-taosdump.md
new file mode 100644
index 0000000000000000000000000000000000000000..839f5889bedb7689d4dfa660457eaea2a593b0e9
--- /dev/null
+++ b/docs/en/07-tools/04-taosdump.md
@@ -0,0 +1,114 @@
+# taosDump
+
+taosdump is a tool that supports backing up data from a running TDengine cluster and restoring the backed-up data to the same, or another, running TDengine cluster.
+
+taosdump can back up a database, a super table, or a normal table as a logical data unit, or back up the data records in databases, super tables, and normal tables. When using taosdump, you can specify the directory path for the data backup. If you do not specify a directory, taosdump backs up the data to the current directory by default.
+
+If the specified location already contains data files, taosdump will prompt the user and exit immediately to avoid overwriting data. This means that the same path can only be used for one backup.
+
+If you see such a prompt, please be careful: ensure that you follow best practices and relevant SOPs for data integrity, backup, and data security.
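+
+For example, a typical backup-and-restore round trip might look like the following sketch. The flags used here (`-D`, `-o`, `-i`) are described in the parameter list below; the directory path and database name are illustrative:
+
+```bash
+# Back up database db1 into an empty directory
+taosdump -o /data/backup/db1 -D db1
+
+# Later, restore that backup into a running cluster
+taosdump -i /data/backup/db1
+```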
+
+Users should not use taosdump to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
+
+## Installation
+
+There are two ways to install taosdump:
+
+- Install the official taosTools installer. Please find taosTools on the [All download links](https://www.tdengine.com/all-downloads) page, then download and install it.
+
+- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
+
+## Common usage scenarios
+
+### taosdump backup data
+
+1. Back up all databases: specify the `-A` or `--all-databases` parameter.
+2. Back up multiple specified databases: use the `-D db1,db2,...` parameter.
+3. Back up certain super tables or normal tables in a specified database: use `dbname stbname1 stbname2 tbname1 tbname2 ...`. Note that the first argument of this input sequence is the database name, and only one database is supported. The second and subsequent arguments are the names of super tables or normal tables in that database, separated by spaces.
+4. Back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is generated by TDengine's own operation, and taosdump will not back up the log database by default. If users need to back up the log database, they can use the `-a` or `--allow-sys` command-line parameter.
+5. Loose mode backup: taosdump version 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without using escape characters ("loose" mode), which can reduce the backup time and the backup data footprint if table names, column names, and tag names do not use escape characters. If you are unsure whether the `-n` and `-L` conditions apply, please use the default parameters for a "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escape characters.
+
+:::tip
+- taosdump versions after 1.4.1 provide the `-I` argument for parsing Avro file schemas and data. If users specify `-s`, taosdump will parse the schema only.
+- Backups after taosdump 1.4.2 use the batch count specified by the `-B` parameter. The default value is 16384. If, in some environments, low network speed or disk performance causes "Error actual dump ... batch ...", try changing the `-B` parameter to a smaller value.
+
+:::
+
+### taosdump recover data
+
+Restore the data files in the specified path: use the `-i` parameter plus the path to the data files. You should not use the same directory to back up different data sets, and you should not back up the same data set multiple times in the same path. Otherwise, the backup data will be overwritten or backed up multiple times.
+
+:::tip
+taosdump internally uses the TDengine stmt binding API to write recovered data, with a default batch size of 16384 for better data recovery performance. If the backup data has many columns, it may cause a "WAL size exceeds limit" error. You can try to adjust the batch size to a smaller value by using the `-B` parameter.
+
+:::
+
+## Detailed command-line parameter list
+
+The following is a detailed list of taosdump command-line arguments.
+
+```
+Usage: taosdump [OPTION...] dbname [tbname ...]
+  or:  taosdump [OPTION...] --databases db1,db2,...
+  or:  taosdump [OPTION...] --all-databases
+  or:  taosdump [OPTION...] -i inpath
+  or:  taosdump [OPTION...] -o outpath
+
+  -h, --host=HOST              Server host from which to dump data. Default is
+                               localhost.
+  -p, --password               User password to connect to server. Default is
+                               taosdata.
+  -P, --port=PORT              Port to connect
+  -u, --user=USER              User name used to connect to server. Default is
+                               root.
+  -c, --config-dir=CONFIG_DIR  Configure directory. Default is /etc/taos
+  -i, --inpath=INPATH          Input file path.
+  -o, --outpath=OUTPATH        Output file path.
+  -r, --resultFile=RESULTFILE  DumpOut/In Result file path and name.
+  -a, --allow-sys              Allow to dump system database
+  -A, --all-databases          Dump all databases.
+  -D, --databases=DATABASES    Dump listed databases. Use comma to separate
+                               database names.
+  -N, --without-property       Dump database without its properties.
+  -s, --schemaonly             Only dump table schemas.
+  -y, --answer-yes             Input yes for prompt. It will skip data file
+                               checking!
+  -d, --avro-codec=snappy      Choose an avro codec among null, deflate, snappy,
+                               and lzma.
+  -S, --start-time=START_TIME  Start time to dump. Either epoch or
+                               ISO8601/RFC3339 format is acceptable. ISO8601
+                               format example: 2017-10-01T00:00:00.000+0800 or
+                               2017-10-0100:00:00.000+0800 or '2017-10-01
+                               00:00:00.000+0800'
+  -E, --end-time=END_TIME      End time to dump. Either epoch or ISO8601/RFC3339
+                               format is acceptable. ISO8601 format example:
+                               2017-10-01T00:00:00.000+0800 or
+                               2017-10-0100:00:00.000+0800 or '2017-10-01
+                               00:00:00.000+0800'
+  -B, --data-batch=DATA_BATCH  Number of data per query/insert statement when
+                               backup/restore. Default value is 16384. If you see
+                               'error actual dump .. batch ..' when backup or if
+                               you see 'WAL size exceeds limit' error when
+                               restore, please adjust the value to a smaller one
+                               and try. The workable value is related to the
+                               length of the row and type of table schema.
+  -I, --inspect                inspect avro file content and print on screen
+  -L, --loose-mode             Use loose mode if the table name and column name
+                               use letter and number only. Default is NOT.
+  -n, --no-escape              No escape char '`'. Default is using it.
+  -T, --thread-num=THREAD_NUM  Number of thread for dump in file. Default is
+                               5.
+  -g, --debug                  Print debug info.
+  -?, --help                   Give this help list
+      --usage                  Give a short usage message
+  -V, --version                Print program version
+
+Mandatory or optional arguments to long options are also mandatory or optional
+for any corresponding short options.
+
+Report bugs to <support@taosdata.com>.
+```
diff --git a/docs/en/08-third-party/02-prometheus.md b/docs/en/08-third-party/02-prometheus.md
new file mode 100644
index 0000000000000000000000000000000000000000..2d1b482a5f9d16b799437888a0cfa3123b1f2b7d
--- /dev/null
+++ b/docs/en/08-third-party/02-prometheus.md
@@ -0,0 +1,116 @@
+# Prometheus
+
+Prometheus is a widely used open-source monitoring and alerting system. Prometheus joined the Cloud Native Computing Foundation (CNCF) in 2016 as the second incubated project after Kubernetes, and it has a very active developer and user community.
+
+Prometheus provides `remote_write` and `remote_read` interfaces to leverage other database products as its storage engine. To enable users of the Prometheus ecosystem to take advantage of TDengine's efficient writing and querying, TDengine also provides support for these two interfaces.
+
+With proper configuration, Prometheus data can be stored in TDengine via the `remote_write` interface. Data stored in TDengine can be queried via the `remote_read` interface, taking full advantage of TDengine's efficient storage and query performance and its clustering capabilities for time-series data.
+
+## Prerequisites
+
+Writing Prometheus data to TDengine requires the following preparations:
+- The TDengine cluster is deployed and functioning properly
+- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
+- Prometheus has been installed. Please refer to the [official documentation](https://prometheus.io/docs/prometheus/latest/installation/) for installing Prometheus.
+
+## Configuration steps
+
+Prometheus is configured by editing the Prometheus configuration file `prometheus.yml` (default location `/etc/prometheus/prometheus.yml`).
+
+### Configuring third-party database addresses
+
+Point the `remote_read url` and `remote_write url` to the domain name or IP address of the server running the taosAdapter service, the REST service port (taosAdapter uses 6041 by default), and the name of the database to write to in TDengine, and ensure that the URLs are formatted as follows:
+
+- remote_read url : `http://<taosAdapter host>:<REST service port>/prometheus/v1/remote_read/<database name>`
+- remote_write url : `http://<taosAdapter host>:<REST service port>/prometheus/v1/remote_write/<database name>`
+
+### Configure Basic authentication
+
+- username: `<TDengine's username>`
+- password: `<TDengine's password>`
+
+### Example configuration of remote_write and remote_read related sections in the prometheus.yml file
+
+```yaml
+remote_write:
+  - url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
+    basic_auth:
+      username: root
+      password: taosdata
+
+remote_read:
+  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
+    basic_auth:
+      username: root
+      password: taosdata
+    remote_timeout: 10s
+    read_recent: true
+```
+
+## Verification method
+
+After restarting Prometheus, you can refer to the following example to verify that data is written from Prometheus to TDengine and can be read out correctly.
+
+### Query and write data using the TDengine CLI
+
+```
+taos> show databases;
+ name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+====================================================================================================================================================================================================================================================================================
+ test | 2022-04-12 08:07:58.756 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+ log | 2022-04-20 07:19:50.260 | 2 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+ prometheus_data | 2022-04-20 07:21:09.202 | 158 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
+ db | 2022-04-15 06:37:08.512 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+Query OK, 4 row(s) in set (0.000585s)
+
+taos> use prometheus_data;
+Database changed.
+
+taos> show stables;
+ name | created_time | columns | tags | tables |
+============================================================================================
+ metrics | 2022-04-20 07:21:09.209 | 2 | 1 | 1389 |
+Query OK, 1 row(s) in set (0.000487s)
+
+taos> select * from metrics limit 10;
+ ts | value | labels |
+=============================================================================================
+ 2022-04-20 07:21:09.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
+ 2022-04-20 07:21:14.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
+ 2022-04-20 07:21:19.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
+ 2022-04-20 07:21:24.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
+ 2022-04-20 07:21:29.193000000 | 0.000024996 | {"__name__":"go_gc_duration... |
+ 2022-04-20 07:21:09.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
+ 2022-04-20 07:21:14.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
+ 2022-04-20 07:21:19.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
+ 2022-04-20 07:21:24.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
+ 2022-04-20 07:21:29.193000000 | 0.000054249 | {"__name__":"go_gc_duration... |
+Query OK, 10 row(s) in set (0.011146s)
+```
+
+### Use promql-cli to read data from TDengine via remote_read
+
+Install promql-cli:
+
+```
+go install github.com/nalbury/promql-cli@latest
+```
+
+Query the Prometheus data while the TDengine and taosAdapter services are running:
+
+```
+ubuntu@shuduo-1804 ~ $ promql-cli --host "http://127.0.0.1:9090" "sum(up) by (job)"
+JOB VALUE TIMESTAMP
+prometheus 1 2022-04-20T08:05:26Z
+node 1 2022-04-20T08:05:26Z
+```
+
+Stop the taosAdapter service and query the Prometheus data again to verify:
+
+```
+ubuntu@shuduo-1804 ~ $ sudo systemctl stop taosadapter.service
+ubuntu@shuduo-1804 ~ $ promql-cli --host "http://127.0.0.1:9090" "sum(up) by (job)"
+VALUE TIMESTAMP
+
+```
diff --git a/docs/en/08-third-party/03-telegraf.md b/docs/en/08-third-party/03-telegraf.md
new file mode 100644
index 0000000000000000000000000000000000000000..4ea77e8135a62fb4da301d72ec84be8e2a947b5b
--- /dev/null
+++ b/docs/en/08-third-party/03-telegraf.md
@@ -0,0 +1,88 @@
+# Telegraf
+
+Telegraf is a popular open-source metrics collection software package. Telegraf can collect operational information from various components without the need to write scripts for periodic collection, reducing the difficulty of data acquisition.
+
+Telegraf data can be written to TDengine by simply pointing Telegraf's output configuration to the URL corresponding to taosAdapter and modifying several configuration items. Once Telegraf data is in TDengine, it can take full advantage of TDengine's efficient storage and query performance and its clustering capabilities for time-series data.
+
+## Prerequisites
+
+Writing Telegraf data to TDengine requires the following preparations:
+- The TDengine cluster is deployed and functioning properly
+- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
+- Telegraf has been installed. Please refer to the [official documentation](https://docs.influxdata.com/telegraf/v1.22/install/) for Telegraf installation.
+
+## Configuration steps
+
+In the Telegraf configuration file (default location `/etc/telegraf/telegraf.conf`), add an `outputs.http` section:
+
+```
+[[outputs.http]]
+  url = "http://<taosAdapter host>:<REST service port>/influxdb/v1/write?db=<database name>"
+  ...
+  username = "<TDengine's username>"
+  password = "<TDengine's password>"
+  ...
+```
+
+For `<taosAdapter host>`, fill in the domain name or IP address of the server running the taosAdapter service. For `<REST service port>`, fill in the port of the REST service (default is 6041). For `<TDengine's username>` and `<TDengine's password>`, fill in the actual values for the currently running TDengine. For `<database name>`, fill in the name of the database where you want to store Telegraf data in TDengine.
+
+An example is as follows.
+
+```
+[[outputs.http]]
+  url = "http://127.0.0.1:6041/influxdb/v1/write?db=telegraf"
+  method = "POST"
+  timeout = "5s"
+  username = "root"
+  password = "taosdata"
+  data_format = "influx"
+  influx_max_line_bytes = 250
+```
+
+## Verification method
+
+Restart the Telegraf service:
+
+```
+sudo systemctl restart telegraf
+```
+
+Use the TDengine CLI to verify that Telegraf is writing data to TDengine correctly and that it can be read out:
+
+```
+taos> show databases;
+ name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
+====================================================================================================================================================================================================================================================================================
+ telegraf | 2022-04-20 08:47:53.488 | 22 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
+ log | 2022-04-20 07:19:50.260 | 9 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
+Query OK, 2 row(s) in set (0.002401s)
+
+taos> use telegraf;
+Database changed.
+
+taos> show stables;
+ name | created_time | columns | tags | tables |
+============================================================================================
+ swap | 2022-04-20 08:47:53.532 | 7 | 1 | 1 |
+ cpu | 2022-04-20 08:48:03.488 | 11 | 2 | 5 |
+ system | 2022-04-20 08:47:53.512 | 8 | 1 | 1 |
+ diskio | 2022-04-20 08:47:53.550 | 12 | 2 | 15 |
+ kernel | 2022-04-20 08:47:53.503 | 6 | 1 | 1 |
+ mem | 2022-04-20 08:47:53.521 | 35 | 1 | 1 |
+ processes | 2022-04-20 08:47:53.555 | 12 | 1 | 1 |
+ disk | 2022-04-20 08:47:53.541 | 8 | 5 | 2 |
+Query OK, 8 row(s) in set (0.000521s)
+
+taos> select * from telegraf.system limit 10;
+ ts | load1 | load5 | load15 | n_cpus | n_users | uptime | uptime_format | host |
+=============================================================================================================================================================================================================================================
+ 2022-04-20 08:47:50.000000000 | 0.000000000 | 0.050000000 | 0.070000000 | 4 | 1 | 5533 | 1:32 | shuduo-1804 |
+ 2022-04-20 08:48:00.000000000 | 0.000000000 | 0.050000000 | 0.070000000 | 4 | 1 | 5543 | 1:32 | shuduo-1804 |
+ 2022-04-20 08:48:10.000000000 | 0.000000000 | 0.040000000 | 0.070000000 | 4 | 1 | 5553 | 1:32 | shuduo-1804 |
+Query OK, 3 row(s) in set (0.013269s)
+```
diff --git a/docs/en/08-third-party/04-emq-broker.md b/docs/en/08-third-party/04-emq-broker.md
new file mode 100644
index 0000000000000000000000000000000000000000..4e1c129ecf32e39532495a76e670f11e019abce0
--- /dev/null
+++ b/docs/en/08-third-party/04-emq-broker.md
@@ -0,0 +1,138 @@
+# EMQX Broker
+
+MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is an open-source MQTT broker software. You can write MQTT data directly to TDengine without any code; you only need to set up "rules" in the EMQX Dashboard to create a simple configuration. EMQX supports saving data to TDengine by sending data to a web service, and the Enterprise Edition also provides a native TDengine driver for direct saving. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it.
+
+## Prerequisites
+
+The following preparations are required for EMQX to add TDengine data sources correctly:
+- The TDengine cluster is deployed and working properly
+- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
+- If you use the simulated writer described later, you need to install the appropriate version of Node.js; v12 is recommended.
+
+## Install and start EMQX
+
+Depending on your operating system, download the installation package from the [EMQX official website](https://www.emqx.io/downloads) and install it. After installation, use `sudo emqx start` or `sudo systemctl start emqx` to start the EMQX service.
+
+## Create Database and Table
+
+In this step, we create the appropriate database and table schema in TDengine for receiving MQTT data. Open the TDengine CLI and execute the SQL below:
+
+```sql
+CREATE DATABASE test;
+USE test;
+CREATE TABLE sensor_data (ts TIMESTAMP, temperature FLOAT, humidity FLOAT, volume FLOAT, pm10 FLOAT, pm25 FLOAT, so2 FLOAT, no2 FLOAT, co FLOAT, sensor_id NCHAR(255), area TINYINT, coll_time TIMESTAMP);
+```
+
+Note: The table schema is based on the blog [(In Chinese) Data Transfer, Storage, Presentation, EMQX + TDengine Build MQTT IoT Data Visualization Platform](https://www.taosdata.com/blog/2020/08/04/1722.html) as an example. Subsequent operations are also carried out with this blog's scenario. Please modify everything according to your actual application scenario.
+
+## Configuring EMQX Rules
+
+Since the configuration interface of EMQX differs from version to version, here we use v4.4.3 as an example. For other versions, please refer to the corresponding official documentation.
+
+### Login EMQX Dashboard
+
+Use your browser to open the URL `http://IP:18083` and log in to the EMQX Dashboard. The initial username is `admin` and the password is `public`.
+
+![TDengine Database EMQX login dashboard](./emqx/login-dashboard.webp)
+
+### Creating Rule
+
+Select "Rule" in the "Rule Engine" on the left and click the "Create" button:
+
+![TDengine Database EMQX rule engine](./emqx/rule-engine.webp)
+
+### Edit SQL fields
+
+Copy the SQL below and paste it into the SQL edit area:
+
+```sql
+SELECT
+  payload
+FROM
+  "sensor/data"
+```
+
+![TDengine Database EMQX create rule](./emqx/create-rule.webp)
+
+### Add "action handler"
+
+![TDengine Database EMQX add action handler](./emqx/add-action-handler.webp)
+
+### Add "Resource"
+
+![TDengine Database EMQX create resource](./emqx/create-resource.webp)
+
+Select "Data to Web Service" and click the "New Resource" button.
+
+### Edit "Resource"
+
+Select "WebHook" and fill in the request URL with the address and port of the server running taosAdapter (the default port is 6041). Leave the other properties at their default values.
+
+![TDengine Database EMQX edit resource](./emqx/edit-resource.webp)
+
+### Edit "action"
+
+Edit the resource configuration to add a key/value pair for Authorization. If you use the default TDengine username and password, the value of the Authorization key is:
+
+```
+Basic cm9vdDp0YW9zZGF0YQ==
+```
+
+Please refer to the [TDengine REST API documentation](/reference/rest-api/) for details on authorization.
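+
+The token after `Basic` is simply the Base64 encoding of `<username>:<password>`. For example, in a shell with the standard `base64` utility, the default credentials encode as follows:
+
+```bash
+# Encode the default TDengine credentials for the HTTP Basic Authorization header
+echo -n 'root:taosdata' | base64
+# Output: cm9vdDp0YW9zZGF0YQ==
+```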
+
+Enter the rule engine replacement template in the message body:
+
+```sql
+INSERT INTO test.sensor_data VALUES(
+  now,
+  ${payload.temperature},
+  ${payload.humidity},
+  ${payload.volume},
+  ${payload.PM10},
+  ${payload.pm25},
+  ${payload.SO2},
+  ${payload.NO2},
+  ${payload.CO},
+  '${payload.id}',
+  ${payload.area},
+  ${payload.ts}
+)
+```
+
+![TDengine Database EMQX edit action](./emqx/edit-action.webp)
+
+Finally, click the "Create" button at the bottom left corner to save the rule.
+
+## Compose a program to mock data
+
+```javascript
+{{#include docs/examples/other/mock.js}}
+```
+
+Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test, in case your hardware is not capable of handling a larger number of concurrent clients.
+
+![TDengine Database EMQX client num](./emqx/client-num.webp)
+
+## Execute tests to simulate sending MQTT data
+
+```
+npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org
+node mock.js
+```
+
+![TDengine Database EMQX run mock](./emqx/run-mock.webp)
+
+## Verify that EMQX is receiving data
+
+Refresh the EMQX Dashboard rule engine interface to see how many records were received correctly:
+
+![TDengine Database EMQX rule matched](./emqx/check-rule-matched.webp)
+
+## Verify that data is written to TDengine
+
+Use the TDengine CLI program to log in and query the relevant databases and tables to verify that the data is being written to TDengine correctly:
+
+![TDengine Database EMQX result in taos](./emqx/check-result-in-taos.webp)
+
+Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
+Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
diff --git a/docs/en/08-third-party/emqx/add-action-handler.webp b/docs/en/08-third-party/emqx/add-action-handler.webp
new file mode 100644
index 0000000000000000000000000000000000000000..4a8d105f711991226cfbd43b6e9ab07d7ccc686a
Binary files /dev/null and b/docs/en/08-third-party/emqx/add-action-handler.webp differ
diff --git a/docs/en/08-third-party/emqx/check-result-in-taos.webp b/docs/en/08-third-party/emqx/check-result-in-taos.webp
new file mode 100644
index 0000000000000000000000000000000000000000..8fa040a86104fece02ddaf8986f0a67de316143d
Binary files /dev/null and b/docs/en/08-third-party/emqx/check-result-in-taos.webp differ
diff --git a/docs/en/08-third-party/emqx/check-rule-matched.webp b/docs/en/08-third-party/emqx/check-rule-matched.webp
new file mode 100644
index 0000000000000000000000000000000000000000..e5a614035739df859b27c817f3b9f41be444b513
Binary files /dev/null and b/docs/en/08-third-party/emqx/check-rule-matched.webp differ
diff --git a/docs/en/08-third-party/emqx/client-num.webp b/docs/en/08-third-party/emqx/client-num.webp
new file mode 100644
index 0000000000000000000000000000000000000000..a151b184843607d67b649babb3145bfb3e329cda
Binary files /dev/null and b/docs/en/08-third-party/emqx/client-num.webp differ
diff --git a/docs/en/08-third-party/emqx/create-resource.webp b/docs/en/08-third-party/emqx/create-resource.webp
new file mode 100644
index 0000000000000000000000000000000000000000..bf9cccbe49c57f925c5e6b094a4c0d88a64242cb
Binary files /dev/null and b/docs/en/08-third-party/emqx/create-resource.webp differ
diff --git a/docs/en/08-third-party/emqx/create-rule.webp b/docs/en/08-third-party/emqx/create-rule.webp
new file mode 100644
index 0000000000000000000000000000000000000000..13e8fc83d48d2fd9d0a303c707ef3024d3ee5203
Binary files /dev/null and b/docs/en/08-third-party/emqx/create-rule.webp differ
diff --git a/docs/en/08-third-party/emqx/edit-action.webp b/docs/en/08-third-party/emqx/edit-action.webp
new file mode 100644
index 0000000000000000000000000000000000000000..7f6d2e36a82b1917930e5d3969115db9359674a0
Binary files /dev/null and b/docs/en/08-third-party/emqx/edit-action.webp differ
diff --git a/docs/en/08-third-party/emqx/edit-resource.webp b/docs/en/08-third-party/emqx/edit-resource.webp
new file mode 100644
index 0000000000000000000000000000000000000000..fd5d278fab16bba4e04e1c348d4086dce77abb98
Binary files /dev/null and b/docs/en/08-third-party/emqx/edit-resource.webp differ
diff --git a/docs/en/08-third-party/emqx/login-dashboard.webp b/docs/en/08-third-party/emqx/login-dashboard.webp
new file mode 100644
index 0000000000000000000000000000000000000000..f84cee668fb6efe1586515ba0dee3ae2f10a5b30
Binary files /dev/null and b/docs/en/08-third-party/emqx/login-dashboard.webp differ
diff --git a/docs/en/08-third-party/emqx/rule-engine.webp b/docs/en/08-third-party/emqx/rule-engine.webp
new file mode 100644
index 0000000000000000000000000000000000000000..c1711c8cc757cd73fef5cb941a1818756241f7f0
Binary files /dev/null and b/docs/en/08-third-party/emqx/rule-engine.webp differ
diff --git a/docs/en/08-third-party/emqx/rule-header-key-value.webp b/docs/en/08-third-party/emqx/rule-header-key-value.webp
new file mode 100644
index 0000000000000000000000000000000000000000..e645b3822dffec86f4926e78a57eaffa1e7f4d8d
Binary files /dev/null and b/docs/en/08-third-party/emqx/rule-header-key-value.webp differ
diff --git a/docs/en/08-third-party/emqx/run-mock.webp b/docs/en/08-third-party/emqx/run-mock.webp
new file mode 100644
index 0000000000000000000000000000000000000000..ed33f1666d456f1ab40ed6830af4550d4c7ca037
Binary files /dev/null and b/docs/en/08-third-party/emqx/run-mock.webp differ
diff --git a/docs/en/09-connector/python.md b/docs/en/09-connector/01-python.md
similarity index 100%
rename from docs/en/09-connector/python.md
rename to docs/en/09-connector/01-python.md
diff --git a/docs/en/09-connector/java.md b/docs/en/09-connector/02-java.md
similarity index 100%
rename from docs/en/09-connector/java.md
rename to docs/en/09-connector/02-java.md
diff --git a/docs/en/09-connector/03-go.md b/docs/en/09-connector/03-go.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e4d6368c5cf2130a0e4abf2c3ba1fe88e16a732
--- /dev/null
+++ b/docs/en/09-connector/03-go.md
@@ -0,0 +1,4 @@
+---
+sidebar_label: Go
+title: TDengine Go Connector
+---
\ No newline at end of file
diff --git a/docs/en/09-connector/rust.md b/docs/en/09-connector/04-rust.md
similarity index 100%
rename from docs/en/09-connector/rust.md
rename to docs/en/09-connector/04-rust.md
diff --git a/docs/en/09-connector/05-node.md b/docs/en/09-connector/05-node.md
new file mode 100644
index 0000000000000000000000000000000000000000..5dd09dd4c825ce403c9d2ab2f14f4fe9b7071ec2
--- /dev/null
+++ b/docs/en/09-connector/05-node.md
@@ -0,0 +1,4 @@
+---
+sidebar_label: Node.js
+title: TDengine Node.js Connector
+---
\ No newline at end of file
diff --git a/docs/examples/other/mock.js b/docs/examples/other/mock.js
new file mode 100644
index 0000000000000000000000000000000000000000..136c5afa96425073fc29854708495e98d0b2e743
--- /dev/null
+++ b/docs/examples/other/mock.js
@@ -0,0 +1,78 @@
+// mock.js
+const mqtt = require('mqtt')
+const Mock = require('mockjs')
+const EMQX_SERVER = 'mqtt://localhost:1883'
+const CLIENT_NUM = 10
+const STEP = 5000 // Data interval in ms
+const AWAIT = 5000 // Sleep time after each batch is written, to avoid writing too fast
+const CLIENT_POOL = []
+startMock()
+function sleep(timer = 100) {
+  return new Promise(resolve => {
+    setTimeout(resolve, timer)
+  })
+}
+async function startMock() {
+  const now = Date.now()
+  // Create the simulated MQTT clients
+  for (let i = 0; i < CLIENT_NUM; i++) {
+    const client = await createClient(`mock_client_${i}`)
+    CLIENT_POOL.push(client)
+  }
+  // Replay the last 24 hours, one data point every STEP ms
+  const last = 24 * 3600 * 1000
+  for (let ts = now - last; ts <= now; ts += STEP) {
+    for (const client of CLIENT_POOL) {
+      const mockData = generateMockData()
+      const data = {
+        ...mockData,
+        id: client.clientId,
+        area: 0,
+        ts,
+      }
+      client.publish('sensor/data', JSON.stringify(data))
+    }
+    const dateStr = new Date(ts).toLocaleTimeString()
+    console.log(`${dateStr} send success.`)
+    await sleep(AWAIT)
+  }
+  console.log(`Done, use ${(Date.now() - now) / 1000}s`)
+}
+/**
+ * Init a virtual mqtt client
+ * @param {string} clientId ClientID
+ */
+function createClient(clientId) {
+  return new Promise((resolve, reject) => {
+    const client = mqtt.connect(EMQX_SERVER, {
+      clientId,
+    })
+    client.on('connect', () => {
+      console.log(`client ${clientId} connected`)
+      resolve(client)
+    })
+    client.on('reconnect', () => {
+      console.log('reconnect')
+    })
+    client.on('error', (e) => {
+      console.error(e)
+      reject(e)
+    })
+  })
+}
+/**
+ * Generate mock data
+ */
+function generateMockData() {
+  return {
+    "temperature": parseFloat(Mock.Random.float(22, 100).toFixed(2)),
+    "humidity": parseFloat(Mock.Random.float(12, 86).toFixed(2)),
+    "volume": parseFloat(Mock.Random.float(20, 200).toFixed(2)),
+    "PM10": parseFloat(Mock.Random.float(0, 300).toFixed(2)),
+    "pm25": parseFloat(Mock.Random.float(0, 300).toFixed(2)),
+    "SO2": parseFloat(Mock.Random.float(0, 50).toFixed(2)),
+    "NO2": parseFloat(Mock.Random.float(0, 50).toFixed(2)),
+    "CO": parseFloat(Mock.Random.float(0, 50).toFixed(2)),
+    "area": Mock.Random.integer(0, 20),
+    "ts": 1596157444170,
+  }
+}
\ No newline at end of file