Unverified commit 51efcf9e, authored by wade zhang, committed by GitHub

Merge pull request #21640 from taosdata/docs/TD-24553

fix: change kafka doc, delete confluent related content
......@@ -16,165 +16,79 @@ TDengine Source Connector is used to read data from TDengine in real-time and se
![TDengine Database Kafka Connector -- streaming integration with kafka connect](kafka/streaming-integration-with-kafka-connect.webp)
## What is Confluent?
[Confluent](https://www.confluent.io/) adds many extensions to Kafka, including:
1. Schema Registry
2. REST Proxy
3. Non-Java Clients
4. Many packaged Kafka Connect plugins
5. GUI for managing and monitoring Kafka - Confluent Control Center
Some of these extensions are available in the community version of Confluent. Some are only available in the enterprise version.
![TDengine Database Kafka Connector -- Confluent platform](kafka/confluentPlatform.webp)
Confluent Enterprise Edition provides the `confluent` command-line tool to manage various components.
## Prerequisites
1. Linux operating system
2. Java 8 and Maven installed
3. Git is installed
3. Git, curl, and vi are installed
4. TDengine is installed and started. If not, please refer to [Installation and Uninstallation](/operation/pkg-install)
## Install Confluent
Confluent provides two installation methods: Docker and binary packages. This article only introduces binary package installation.
## Install Kafka
Execute in any directory:
````
curl -O http://packages.confluent.io/archive/7.1/confluent-7.1.1.tar.gz
tar xzf confluent-7.1.1.tar.gz -C /opt/
curl -O https://downloads.apache.org/kafka/3.4.0/kafka_2.13-3.4.0.tgz
tar xzf kafka_2.13-3.4.0.tgz -C /opt/
ln -s /opt/kafka_2.13-3.4.0 /opt/kafka
````
Then you need to add the `$CONFLUENT_HOME/bin` directory to the PATH.
Then you need to add the `$KAFKA_HOME/bin` directory to the PATH.
```title=".profile"
export CONFLUENT_HOME=/opt/confluent-7.1.1
export PATH=$CONFLUENT_HOME/bin:$PATH
export KAFKA_HOME=/opt/kafka
export PATH=$PATH:$KAFKA_HOME/bin
```
Users can append the above lines to the current user's profile file (~/.profile or ~/.bash_profile) and then reload it, as shown below.
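A minimal way to apply and verify the change in the current shell (assuming the export lines were appended to `~/.profile`):
```shell
# Reload the profile so the new environment variables take effect in this shell
source ~/.profile

# Verify that KAFKA_HOME is set and the Kafka scripts are on the PATH
echo $KAFKA_HOME
which kafka-topics.sh
```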
After the installation is complete, you can enter `confluent version` for simple verification:
```
# confluent version
confluent - Confluent CLI
Version: v2.6.1
Git Ref: 6d920590
Build Date: 2022-02-18T06:14:21Z
Go Version: go1.17.6 (linux/amd64)
Development: false
```
## Install TDengine Connector plugin
### Install from source code
```
```shell
git clone --branch 3.0 https://github.com/taosdata/kafka-connect-tdengine.git
cd kafka-connect-tdengine
mvn clean package
unzip -d $CONFLUENT_HOME/share/java/ target/components/packages/taosdata-kafka-connect-tdengine-*.zip
mvn clean package -Dmaven.test.skip=true
unzip -d $KAFKA_HOME/components/ target/components/packages/taosdata-kafka-connect-tdengine-*.zip
```
The above script first clones the project source code and then compiles and packages it with Maven. After packaging is complete, the zip package of the plugin is generated in the `target/components/packages/` directory. Unzip this zip package to the plugin path. We used `$CONFLUENT_HOME/share/java/` above because it is a built-in plugin path.
### Install with confluent-hub
The above script first clones the project source code and then compiles and packages it with Maven. After packaging is complete, the zip package of the plugin is generated in the `target/components/packages/` directory. Unzip this zip package to the plugin path. We used `$KAFKA_HOME/components/` above because it is a built-in plugin path.
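To confirm that the plugin landed in the expected location, you can list the plugin directory (a quick check; the exact directory name depends on the connector version that was built):
```shell
# The unzip step above should have created a taosdata-kafka-connect-tdengine-* directory here
ls $KAFKA_HOME/components/
```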
[Confluent Hub](https://www.confluent.io/hub) provides a service to download Kafka Connect plugins. After TDengine Kafka Connector is published to Confluent Hub, it can be installed using the command tool `confluent-hub`.
**TDengine Kafka Connector is currently not officially released and cannot be installed in this way**.
### Add configuration file
## Start Confluent
add kafka-connect-tdengine plugin path to `plugin.path` in `$KAFKA_HOME/config/connect-distributed.properties`.
```
confluent local services start
```properties
plugin.path=/usr/share/java,/opt/kafka/components
```
:::note
Be sure to install the plugin before starting Confluent. Otherwise, Kafka Connect will fail to discover the plugins.
:::
## Start Kafka Services
:::tip
If a component fails to start, try clearing the data and restarting. The data directory will be printed to the console at startup, e.g.:
```title="Console output log" {1}
Using CONFLUENT_CURRENT: /tmp/confluent.106668
Starting ZooKeeper
ZooKeeper is [UP]
Starting Kafka
Kafka is [UP]
Starting Schema Registry
Schema Registry is [UP]
Starting Kafka REST
Kafka REST is [UP]
Starting Connect
Connect is [UP]
Starting ksqlDB Server
ksqlDB Server is [UP]
Starting Control Center
Control Center is [UP]
```
Use the commands below to start all services:
To clear data, execute `rm -rf /tmp/confluent.106668`.
:::
```shell
zookeeper-server-start.sh -daemon $KAFKA_HOME/config/zookeeper.properties
### Check Confluent Services Status
kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
Use the command below to check the status of all services:
connect-distributed.sh -daemon $KAFKA_HOME/config/connect-distributed.properties
```
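To confirm that the three services are actually running, one option is to check the Java processes (a sketch assuming a JDK whose `jps` tool is on the PATH; the class names below are the standard main classes of ZooKeeper, the Kafka broker, and Kafka Connect in distributed mode):
```shell
# Expect one line each for QuorumPeerMain (ZooKeeper), Kafka (broker), and ConnectDistributed (Kafka Connect)
jps | grep -E 'QuorumPeerMain|Kafka|ConnectDistributed'
```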
confluent local services status
```
The expected output is:
```
Connect is [UP]
Control Center is [UP]
Kafka is [UP]
Kafka REST is [UP]
ksqlDB Server is [UP]
Schema Registry is [UP]
ZooKeeper is [UP]
```
### Check Successfully Loaded Plugin
After Kafka Connect has fully started, you can use the command below to check whether the plugins were installed successfully:
```
confluent local services connect plugin list
```
The output should contain `TDengineSinkConnector` and `TDengineSourceConnector` as below:
```
Available Connect Plugins:
[
{
"class": "com.taosdata.kafka.connect.sink.TDengineSinkConnector",
"type": "sink",
"version": "1.0.0"
},
{
"class": "com.taosdata.kafka.connect.source.TDengineSourceConnector",
"type": "source",
"version": "1.0.0"
},
......
```shell
curl http://localhost:8083/connectors
```
If not, please check the log file of Kafka Connect. To view the log file path, please execute:
The output is as below:
```txt
[]
```
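You can also ask Kafka Connect which connector classes it has loaded, using the standard Kafka Connect REST endpoint `/connector-plugins` (not TDengine-specific). If the plugin was installed correctly, the TDengine sink and source connector classes should appear in the returned JSON:
```shell
# Lists all connector plugins known to this Kafka Connect worker
curl http://localhost:8083/connector-plugins
```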
echo `cat /tmp/confluent.current`/connect/connect.stdout
```
It should produce a path like: `/tmp/confluent.104086/connect/connect.stdout`
Besides the log file `connect.stdout`, there is a file named `connect.properties` in the same directory. At the end of this file you can see the effective `plugin.path`, which is a series of paths joined by commas. If Kafka Connect did not find our plugins, it is probably because the installation path is not included in `plugin.path`.
## The use of TDengine Sink Connector
......@@ -184,40 +98,47 @@ TDengine Sink Connector internally uses TDengine [schemaless write interface](/ref
The following example synchronizes the data of the topic meters to the target database power. The data format is the InfluxDB Line protocol format.
### Add configuration file
### Add Sink Connector configuration file
```
```shell
mkdir ~/test
cd ~/test
vi sink-demo.properties
vi sink-demo.json
```
The content of sink-demo.properties is as follows:
```ini title="sink-demo.properties"
name=TDengineSinkConnector
connector.class=com.taosdata.kafka.connect.sink.TDengineSinkConnector
tasks.max=1
topics=meters
connection.url=jdbc:TAOS://127.0.0.1:6030
connection.user=root
connection.password=taosdata
connection.database=power
db.schemaless=line
data.precision=ns
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
The content of sink-demo.json is as follows:
```json title="sink-demo.json"
{
"name": "TDengineSinkConnector",
"config": {
"connector.class":"com.taosdata.kafka.connect.sink.TDengineSinkConnector",
"tasks.max": "1",
"topics": "meters",
"connection.url": "jdbc:TAOS://127.0.0.1:6030",
"connection.user": "root",
"connection.password": "taosdata",
"connection.database": "power",
"db.schemaless": "line",
"data.precision": "ns",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.storage.StringConverter",
"errors.tolerance": "all",
"errors.deadletterqueue.topic.name": "dead_letter_topic",
"errors.deadletterqueue.topic.replication.factor": 1
}
}
```
Key configuration instructions:
1. `topics=meters` and `connection.database=power` mean subscribing to the data of the topic meters and writing it to the database power.
2. `db.schemaless=line` means the data is in the InfluxDB Line protocol format.
1. `"topics": "meters"` and `"connection.database": "power"` mean subscribing to the data of the topic meters and writing it to the database power.
2. `"db.schemaless": "line"` means the data is in the InfluxDB Line protocol format.
### Create Connector instance
### Create Sink Connector instance
````
confluent local services connect connector load TDengineSinkConnector --config ./sink-demo.properties
````shell
curl -X POST -d @sink-demo.json http://localhost:8083/connectors -H "Content-Type: application/json"
````
If the above command is executed successfully, the output is as follows:
......@@ -237,7 +158,10 @@ If the above command is executed successfully, the output is as follows:
"tasks.max": "1",
"topics": "meters",
"value.converter": "org.apache.kafka.connect.storage.StringConverter",
"name": "TDengineSinkConnector"
"name": "TDengineSinkConnector",
"errors.tolerance": "all",
"errors.deadletterqueue.topic.name": "dead_letter_topic",
"errors.deadletterqueue.topic.replication.factor": "1",
},
"tasks": [],
"type": "sink"
......@@ -258,7 +182,7 @@ meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0
Use kafka-console-producer to write test data to the topic `meters`.
```
cat test-data.txt | kafka-console-producer --broker-list localhost:9092 --topic meters
cat test-data.txt | kafka-console-producer.sh --broker-list localhost:9092 --topic meters
```
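Before checking the data in TDengine, you can optionally confirm that the sink connector task is running, using the standard Kafka Connect REST status endpoint (the connector name must match the `name` field used when the connector was created):
```shell
# Shows the connector state and the state of each of its tasks
curl http://localhost:8083/connectors/TDengineSinkConnector/status
```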
:::note
......@@ -269,12 +193,12 @@ TDengine Sink Connector will automatically create the database if the target dat
Use the TDengine CLI to verify that the sync was successful.
```
```sql
taos> use power;
Database changed.
taos> select * from meters;
ts | current | voltage | phase | groupid | location |
_ts | current | voltage | phase | groupid | location |
===============================================================================================================================================================
2022-03-28 09:56:51.249000000 | 11.800000000 | 221.000000000 | 0.280000000 | 2 | California.LosAngeles |
2022-03-28 09:56:51.250000000 | 13.400000000 | 223.000000000 | 0.290000000 | 2 | California.LosAngeles |
......@@ -293,29 +217,34 @@ TDengine Source Connector will convert the data in TDengine data table into [Inf
The following sample program synchronizes the data in the database test to the topic tdengine-source-test.
### Add configuration file
### Add Source Connector configuration file
```
vi source-demo.properties
```shell
vi source-demo.json
```
Enter the following content:
```ini title="source-demo.properties"
name=TDengineSourceConnector
connector.class=com.taosdata.kafka.connect.source.TDengineSourceConnector
tasks.max=1
connection.url=jdbc:TAOS://127.0.0.1:6030
connection.username=root
connection.password=taosdata
connection.database=test
connection.attempts=3
connection.backoff.ms=5000
topic.prefix=tdengine-source-
poll.interval.ms=1000
fetch.max.rows=100
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
```json title="source-demo.json"
{
"name":"TDengineSourceConnector",
"config":{
"connector.class": "com.taosdata.kafka.connect.source.TDengineSourceConnector",
"tasks.max": 1,
"connection.url": "jdbc:TAOS://127.0.0.1:6030",
"connection.username": "root",
"connection.password": "taosdata",
"connection.database": "test",
"connection.attempts": 3,
"connection.backoff.ms": 5000,
"topic.prefix": "tdengine-source",
"poll.interval.ms": 1000,
"fetch.max.rows": 100,
"topic.per.stable": true,
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.storage.StringConverter"
}
}
```
### Prepare test data
......@@ -340,40 +269,40 @@ INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-1
Use the TDengine CLI to execute the SQL script:
```
```shell
taos -f prepare-source-data.sql
```
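As a quick sanity check that the test data was written, you can query the row count from the shell (a sketch assuming the `taos` CLI is on the PATH and the database is named `test`, as in the script above):
```shell
# Should report the number of rows inserted by prepare-source-data.sql
taos -s "SELECT COUNT(*) FROM test.meters;"
```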
### Create Connector instance
````
confluent local services connect connector load TDengineSourceConnector --config source-demo.properties
````
```shell
curl -X POST -d @source-demo.json http://localhost:8083/connectors -H "Content-Type: application/json"
```
### View topic data
Use the kafka-console-consumer command-line tool to monitor data in the topic tdengine-source-test. At first, all historical data will be output. After two new rows are inserted into TDengine, kafka-console-consumer immediately outputs them. The output is in InfluxDB line protocol format.
````
kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic tdengine-source-test
````shell
kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic tdengine-source-test-meters
````
output:
````
```txt
......
meters,location="California.SanFrancisco",groupid=2i32 current=10.3f32,voltage=219i32,phase=0.31f32 1538548685000000000
meters,location="California.SanFrancisco",groupid=2i32 current=12.6f32,voltage=218i32,phase=0.33f32 1538548695000000000
......
````
```
All historical data is displayed. Switch to the TDengine CLI and insert two new pieces of data:
````
```sql
USE test;
INSERT INTO d1001 VALUES (now, 13.3, 229, 0.38);
INSERT INTO d1002 VALUES (now, 16.3, 233, 0.22);
````
```
Switch back to kafka-console-consumer; the command-line window now shows the two newly inserted rows.
......@@ -383,16 +312,16 @@ After testing, use the unload command to stop the loaded connector.
View currently active connectors:
````
confluent local services connect connector status
````
```shell
curl http://localhost:8083/connectors
```
You should now have two active connectors if you followed the previous steps. Use the following commands to unload them:
````
confluent local services connect connector unload TDengineSinkConnector
confluent local services connect connector unload TDengineSourceConnector
````
```shell
curl -X DELETE http://localhost:8083/connectors/TDengineSinkConnector
curl -X DELETE http://localhost:8083/connectors/TDengineSourceConnector
```
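After the DELETE requests return, listing the connectors again should show an empty array, confirming that both connectors were removed:
```shell
# Expected output: []
curl http://localhost:8083/connectors
```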
## Configuration reference
......@@ -430,19 +359,14 @@ The following configuration items apply to TDengine Sink Connector and TDengine
6. `query.interval.ms`: The time range of each read from TDengine, in milliseconds. It should be adjusted according to the rate at which data flows in; the default value is 1000.
7. `topic.per.stable`: If set to true, one super table in TDengine corresponds to one topic in Kafka, and the topic naming rule is `<topic.prefix>-<connection.database>-<stable.name>`; if set to false, the whole database corresponds to one topic in Kafka, and the topic naming rule is `<topic.prefix>-<connection.database>`. See the example below.
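For example, with the values used earlier in this article (`topic.prefix` set to `tdengine-source`, `connection.database` set to `test`, and a super table named `meters`), the naming rules above would produce the following topics (illustrative only):
```txt
topic.per.stable=true   ->  tdengine-source-test-meters   (one topic per super table)
topic.per.stable=false  ->  tdengine-source-test          (one topic for the whole database)
```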
## Other notes
1. To install plugin to a customized location, refer to https://docs.confluent.io/home/connect/self-managed/install.html#install-connector-manually.
2. To use Kafka Connect without confluent, refer to https://kafka.apache.org/documentation/#connect.
1. To use Kafka Connect, refer to <https://kafka.apache.org/documentation/#connect>.
## Feedback
https://github.com/taosdata/kafka-connect-tdengine/issues
<https://github.com/taosdata/kafka-connect-tdengine/issues>
## Reference
1. https://www.confluent.io/what-is-apache-kafka
2. https://developer.confluent.io/learn-kafka/kafka-connect/intro
3. https://docs.confluent.io/platform/current/platform.html
1. For more information, see <https://kafka.apache.org/documentation/>
......@@ -16,169 +16,78 @@ TDengine Source Connector is used to read data from TDengine in real time and send
![TDengine Database Kafka Connector -- streaming integration with kafka connect](kafka/streaming-integration-with-kafka-connect.webp)
## What is Confluent?
[Confluent](https://www.confluent.io/) adds many extensions on top of Kafka, including:
1. Schema Registry
2. REST Proxy
3. Non-Java clients
4. Many packaged Kafka Connect plugins
5. GUI for managing and monitoring Kafka: Confluent Control Center
Some of these extensions are included in the community edition of Confluent; others are only available in the enterprise edition.
![TDengine Database Kafka Connector -- Confluent introduction](kafka/confluentPlatform.webp)
Confluent Enterprise Edition provides the `confluent` command-line tool to manage the components.
## Prerequisites
Prerequisites for running the examples in this tutorial:
1. Linux operating system
2. Java 8 and Maven installed
3. Git installed
3. Git, curl, and vi installed
4. TDengine installed and started. If not, refer to [Installation and Uninstallation](/operation/pkg-install)
## Install Confluent
Confluent provides two installation methods: Docker and binary packages. This article only covers installation from binary packages.
## Install Kafka
Execute in any directory:
```
curl -O http://packages.confluent.io/archive/7.1/confluent-7.1.1.tar.gz
tar xzf confluent-7.1.1.tar.gz -C /opt/
```shell
curl -O https://downloads.apache.org/kafka/3.4.0/kafka_2.13-3.4.0.tgz
tar xzf kafka_2.13-3.4.0.tgz -C /opt/
ln -s /opt/kafka_2.13-3.4.0 /opt/kafka
```
Then you need to add the `$CONFLUENT_HOME/bin` directory to the PATH.
Then you need to add the `$KAFKA_HOME/bin` directory to the PATH.
```title=".profile"
export CONFLUENT_HOME=/opt/confluent-7.1.1
export PATH=$CONFLUENT_HOME/bin:$PATH
export KAFKA_HOME=/opt/kafka
export PATH=$PATH:$KAFKA_HOME/bin
```
The above lines can be appended to the current user's profile file (~/.profile or ~/.bash_profile).
After installation, you can run `confluent version` as a simple verification:
```
# confluent version
confluent - Confluent CLI
Version: v2.6.1
Git Ref: 6d920590
Build Date: 2022-02-18T06:14:21Z
Go Version: go1.17.6 (linux/amd64)
Development: false
```
## Install TDengine Connector plugin
### Install from source code
### Build the plugin
```
```shell
git clone --branch 3.0 https://github.com/taosdata/kafka-connect-tdengine.git
cd kafka-connect-tdengine
mvn clean package
unzip -d $CONFLUENT_HOME/share/java/ target/components/packages/taosdata-kafka-connect-tdengine-*.zip
mvn clean package -Dmaven.test.skip=true
unzip -d $KAFKA_HOME/components/ target/components/packages/taosdata-kafka-connect-tdengine-*.zip
```
The above script first clones the project source code and then builds and packages it with Maven. After packaging is complete, the plugin's zip package is generated in the `target/components/packages/` directory. Unzip this zip package to the plugin installation path. The example above uses the built-in plugin installation path: `$CONFLUENT_HOME/share/java/`
### Install with confluent-hub
[Confluent Hub](https://www.confluent.io/hub) provides a service for downloading Kafka Connect plugins. After TDengine Kafka Connector is published to Confluent Hub, it can be installed with the command-line tool `confluent-hub`.
**TDengine Kafka Connector has not been officially released yet and cannot be installed this way.**
The above script first clones the project source code and then builds and packages it with Maven. After packaging is complete, the plugin's zip package is generated in the `target/components/packages/` directory. Unzip this zip package to the plugin installation path. The example above uses the built-in plugin installation path: `$KAFKA_HOME/components/`
## Start Confluent
### Configure the plugin
```
confluent local services start
```
:::note
Be sure to install the plugin before starting Confluent; otherwise, plugin loading will fail.
:::
Add the kafka-connect-tdengine plugin path to `plugin.path` in the `$KAFKA_HOME/config/connect-distributed.properties` configuration file.
:::tip
If a component fails to start, try clearing its data and restarting. The data directory is printed to the console at startup, for example:
```title="Console output log" {1}
Using CONFLUENT_CURRENT: /tmp/confluent.106668
Starting ZooKeeper
ZooKeeper is [UP]
Starting Kafka
Kafka is [UP]
Starting Schema Registry
Schema Registry is [UP]
Starting Kafka REST
Kafka REST is [UP]
Starting Connect
Connect is [UP]
Starting ksqlDB Server
ksqlDB Server is [UP]
Starting Control Center
Control Center is [UP]
```properties
plugin.path=/usr/share/java,/opt/kafka/components
```
To clear the data, execute `rm -rf /tmp/confluent.106668`.
:::
## Start Kafka
### Verify that all components started successfully
```shell
zookeeper-server-start.sh -daemon $KAFKA_HOME/config/zookeeper.properties
Enter the command:
kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties
```
confluent local services status
```
If all components started successfully, you will get the following output:
```
Connect is [UP]
Control Center is [UP]
Kafka is [UP]
Kafka REST is [UP]
ksqlDB Server is [UP]
Schema Registry is [UP]
ZooKeeper is [UP]
connect-distributed.sh -daemon $KAFKA_HOME/config/connect-distributed.properties
```
### Verify the plugin was installed successfully
### Verify Kafka Connect started successfully
After the Kafka Connect component has fully started, you can use the following command to list the successfully loaded plugins.
Enter the command:
```shell
curl http://localhost:8083/connectors
```
confluent local services connect plugin list
```
If the installation succeeded, the output is as follows:
```txt {4,9}
Available Connect Plugins:
[
{
"class": "com.taosdata.kafka.connect.sink.TDengineSinkConnector",
"type": "sink",
"version": "1.0.0"
},
{
"class": "com.taosdata.kafka.connect.source.TDengineSourceConnector",
"type": "source",
"version": "1.0.0"
},
......
```
If all components started successfully, you will get the following output:
If the plugin installation failed, check the Kafka Connect startup log for error messages. Use the following command to print the log file path:
```
echo `cat /tmp/confluent.current`/connect/connect.stdout
```txt
[]
```
The output of this command is similar to: `/tmp/confluent.104086/connect/connect.stdout`
In the same directory as the log file `connect.stdout` there is also a file named `connect.properties`. At the end of this file you can see the effective `plugin.path`, which is a series of paths separated by commas. If the plugin installation failed, it is most likely because the actual installation path is not included in `plugin.path`.
## Using TDengine Sink Connector
......@@ -188,40 +97,47 @@ TDengine Sink Connector internally uses the TDengine [schemaless write interface](../../conn
The following example synchronizes data from the topic meters to the target database power. The data format is the InfluxDB Line protocol format.
### Add configuration file
### Add Sink Connector configuration file
```
```shell
mkdir ~/test
cd ~/test
vi sink-demo.properties
vi sink-demo.json
```
The content of sink-demo.properties is as follows:
```ini title="sink-demo.properties"
name=TDengineSinkConnector
connector.class=com.taosdata.kafka.connect.sink.TDengineSinkConnector
tasks.max=1
topics=meters
connection.url=jdbc:TAOS://127.0.0.1:6030
connection.user=root
connection.password=taosdata
connection.database=power
db.schemaless=line
data.precision=ns
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
The content of sink-demo.json is as follows:
```json title="sink-demo.json"
{
"name": "TDengineSinkConnector",
"config": {
"connector.class":"com.taosdata.kafka.connect.sink.TDengineSinkConnector",
"tasks.max": "1",
"topics": "meters",
"connection.url": "jdbc:TAOS://127.0.0.1:6030",
"connection.user": "root",
"connection.password": "taosdata",
"connection.database": "power",
"db.schemaless": "line",
"data.precision": "ns",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.storage.StringConverter",
"errors.tolerance": "all",
"errors.deadletterqueue.topic.name": "dead_letter_topic",
"errors.deadletterqueue.topic.replication.factor": 1
}
}
```
Key configuration notes:
1. `topics=meters` and `connection.database=power` mean subscribing to the data of the topic meters and writing it to the database power.
2. `db.schemaless=line` means the data uses the InfluxDB Line protocol format.
1. `"topics": "meters"` and `"connection.database": "power"` mean subscribing to the data of the topic meters and writing it to the database power.
2. `"db.schemaless": "line"` means the data uses the InfluxDB Line protocol format.
### Create Connector instance
### Create Sink Connector instance
```
confluent local services connect connector load TDengineSinkConnector --config ./sink-demo.properties
```shell
curl -X POST -d @sink-demo.json http://localhost:8083/connectors -H "Content-Type: application/json"
```
If the above command executes successfully, the output is as follows:
......@@ -241,7 +157,10 @@ confluent local services connect connector load TDengineSinkConnector --config .
"tasks.max": "1",
"topics": "meters",
"value.converter": "org.apache.kafka.connect.storage.StringConverter",
"name": "TDengineSinkConnector"
"name": "TDengineSinkConnector",
"errors.tolerance": "all",
"errors.deadletterqueue.topic.name": "dead_letter_topic",
"errors.deadletterqueue.topic.replication.factor": "1",
},
"tasks": [],
"type": "sink"
......@@ -261,8 +180,8 @@ meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0
Use kafka-console-producer to add test data to the topic meters.
```
cat test-data.txt | kafka-console-producer --broker-list localhost:9092 --topic meters
```shell
cat test-data.txt | kafka-console-producer.sh --broker-list localhost:9092 --topic meters
```
:::note
......@@ -273,12 +192,12 @@ cat test-data.txt | kafka-console-producer --broker-list localhost:9092 --topic
Use the TDengine CLI to verify that the synchronization succeeded.
```
```sql
taos> use power;
Database changed.
taos> select * from meters;
ts | current | voltage | phase | groupid | location |
_ts | current | voltage | phase | groupid | location |
===============================================================================================================================================================
2022-03-28 09:56:51.249000000 | 11.800000000 | 221.000000000 | 0.280000000 | 2 | California.LosAngeles |
2022-03-28 09:56:51.250000000 | 13.400000000 | 223.000000000 | 0.290000000 | 2 | California.LosAngeles |
......@@ -297,29 +216,34 @@ TDengine Source Connector converts the data in TDengine data tables into [Influx
The following sample program synchronizes data from the database test to the topic tdengine-source-test.
### Add configuration file
### Add Source Connector configuration file
```
vi source-demo.properties
```shell
vi source-demo.json
```
Enter the following content:
```ini title="source-demo.properties"
name=TDengineSourceConnector
connector.class=com.taosdata.kafka.connect.source.TDengineSourceConnector
tasks.max=1
connection.url=jdbc:TAOS://127.0.0.1:6030
connection.username=root
connection.password=taosdata
connection.database=test
connection.attempts=3
connection.backoff.ms=5000
topic.prefix=tdengine-source-
poll.interval.ms=1000
fetch.max.rows=100
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
```json title="source-demo.json"
{
"name":"TDengineSourceConnector",
"config":{
"connector.class": "com.taosdata.kafka.connect.source.TDengineSourceConnector",
"tasks.max": 1,
"connection.url": "jdbc:TAOS://127.0.0.1:6030",
"connection.username": "root",
"connection.password": "taosdata",
"connection.database": "test",
"connection.attempts": 3,
"connection.backoff.ms": 5000,
"topic.prefix": "tdengine-source",
"poll.interval.ms": 1000,
"fetch.max.rows": 100,
"topic.per.stable": true,
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.storage.StringConverter"
}
}
```
### Prepare test data
......@@ -344,27 +268,27 @@ INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-1
Use the TDengine CLI to execute the SQL file.
```
```shell
taos -f prepare-source-data.sql
```
### Create Connector instance
### Create Source Connector instance
```
confluent local services connect connector load TDengineSourceConnector --config source-demo.properties
```shell
curl -X POST -d @source-demo.json http://localhost:8083/connectors -H "Content-Type: application/json"
```
### View topic data
Use the kafka-console-consumer command-line tool to monitor the data in the topic tdengine-source-test. At first, all historical data is output; after two new rows are inserted into TDengine, kafka-console-consumer immediately outputs the two new rows. The output is in InfluxDB line protocol format.
```
kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic tdengine-source-test
```shell
kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic tdengine-source-test-meters
```
Output:
```
```txt
......
meters,location="California.SanFrancisco",groupid=2i32 current=10.3f32,voltage=219i32,phase=0.31f32 1538548685000000000
meters,location="California.SanFrancisco",groupid=2i32 current=12.6f32,voltage=218i32,phase=0.33f32 1538548695000000000
......@@ -373,7 +297,7 @@ meters,location="California.SanFrancisco",groupid=2i32 current=12.6f32,voltage=2
All historical data is displayed at this point. Switch to the TDengine CLI and insert two new rows:
```
```sql
USE test;
INSERT INTO d1001 VALUES (now, 13.3, 229, 0.38);
INSERT INTO d1002 VALUES (now, 16.3, 233, 0.22);
......@@ -387,15 +311,15 @@ INSERT INTO d1002 VALUES (now, 16.3, 233, 0.22);
View the currently active connectors:
```
confluent local services connect connector status
```shell
curl http://localhost:8083/connectors
```
If you followed the previous steps, there should now be two active connectors. Use the following commands to unload them:
```
confluent local services connect connector unload TDengineSinkConnector
confluent local services connect connector unload TDengineSourceConnector
```shell
curl -X DELETE http://localhost:8083/connectors/TDengineSinkConnector
curl -X DELETE http://localhost:8083/connectors/TDengineSourceConnector
```
## Configuration reference
......@@ -442,15 +366,12 @@ confluent local services connect connector unload TDengineSourceConnector
## Other notes
1. The plugin installation location can be customized; refer to the official documentation: https://docs.confluent.io/home/connect/self-managed/install.html#install-connector-manually.
2. The examples in this tutorial use the Confluent platform, but TDengine Kafka Connector itself also works with a standalone Kafka installation, with the same configuration. For how to use Kafka Connect plugins in a standalone Kafka environment, refer to the official documentation: https://kafka.apache.org/documentation/#connect.
1. For how to use Kafka Connect plugins in a standalone Kafka environment, refer to the official documentation: <https://kafka.apache.org/documentation/#connect>
## Feedback
If you encounter any problems, you are welcome to report them in this project's GitHub repository: https://github.com/taosdata/kafka-connect-tdengine/issues
If you encounter any problems, you are welcome to report them in this project's GitHub repository: <https://github.com/taosdata/kafka-connect-tdengine/issues>
## References
1. https://www.confluent.io/what-is-apache-kafka
2. https://developer.confluent.io/learn-kafka/kafka-connect/intro
3. https://docs.confluent.io/platform/current/platform.html
1. <https://kafka.apache.org/documentation/>