Commit d093b0e2 authored by kailixu

chore: merge 3.0

@@ -16,7 +16,6 @@ debug/
 release/
 target/
 debs/
-deps/
 rpms/
 mac/
 *.pyc
@@ -131,3 +130,4 @@ tools/BUGS
 tools/taos-tools
 tools/taosws-rs
 tags
+.clangd
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0
    hooks:
      - id: check-yaml
      - id: check-json
      - id: end-of-file-fixer
      - id: trailing-whitespace
repos:
  - repo: https://github.com/psf/black
    rev: stable
    hooks:
      - id: black
repos:
  - repo: https://github.com/pocc/pre-commit-hooks
    rev: master
    hooks:
      - id: cppcheck
        args: ["--error-exitcode=0"]
repos:
  - repo: https://github.com/crate-ci/typos
    rev: v1.15.7
    hooks:
      - id: typos
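With this configuration in place, the hooks run on every `git commit`. A minimal usage sketch, assuming the standard `pre-commit` CLI (the commit itself does not prescribe a workflow):

```bash
# Install the framework and register the hooks from .pre-commit-config.yaml
pip install pre-commit
pre-commit install
# Run all configured hooks (check-yaml, black, cppcheck, typos, ...) once across the tree
pre-commit run --all-files
```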
@@ -15,11 +15,15 @@ SET(TD_COMMUNITY_DIR ${PROJECT_SOURCE_DIR})
 set(TD_SUPPORT_DIR "${TD_SOURCE_DIR}/cmake")
 set(TD_CONTRIB_DIR "${TD_SOURCE_DIR}/contrib")
 include(${TD_SUPPORT_DIR}/cmake.platform)
 include(${TD_SUPPORT_DIR}/cmake.define)
 include(${TD_SUPPORT_DIR}/cmake.options)
 include(${TD_SUPPORT_DIR}/cmake.version)
 # contrib
 add_subdirectory(contrib)
...
@@ -18,7 +18,7 @@
 Note: name branches that only change documentation with a `docs/` prefix to avoid unnecessary tests.
 4. Create a pull request to merge your branch into the development branch `3.0`; our development team will review it as soon as possible.
-If you run into any problem, add the official WeChat account TDengineECO and our team will help you solve it.
+If you run into any problem, add the official WeChat account tdengine1 and our team will help you solve it.
 ## Gifts for Contributors
@@ -48,4 +48,4 @@ The TDengine community is committed to helping more developers understand and use it.
 ## Contact Us
-If you have a problem to solve or a question to be answered, you can add our WeChat: TDengineECO
+If you have a problem to solve or a question to be answered, you can add our WeChat: tdengine1.
@@ -314,7 +314,7 @@ def pre_test_build_win() {
 cd %WIN_CONNECTOR_ROOT%
 python.exe -m pip install --upgrade pip
 python -m pip uninstall taospy -y
-python -m pip install taospy==2.7.6
+python -m pip install taospy==2.7.10
 xcopy /e/y/i/f %WIN_INTERNAL_ROOT%\\debug\\build\\lib\\taos.dll C:\\Windows\\System32
 '''
 return 1
...
@@ -15,7 +15,7 @@
 [![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=develop)](https://coveralls.io/github/taosdata/TDengine?branch=develop)
 [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
-Simplified Chinese | [English](README.md) | Many positions are open for hire, see [here](https://www.taosdata.com/cn/careers/)
+Simplified Chinese | [English](README.md) | [TDengine Cloud](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | Many positions are open for hire, see [here](https://www.taosdata.com/cn/careers/)
 # Introduction to TDengine
@@ -68,14 +68,14 @@ sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-d
 ```bash
 sudo yum install epel-release
 sudo yum update
-sudo yum install -y gcc gcc-c++ make cmake3 git openssl-devel
+sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
 sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
 ```
-### CentOS 8 & Fedora
+### CentOS 8/Fedora/Rocky Linux
 ```bash
-sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
+sudo dnf install -y gcc gcc-c++ gflags make cmake epel-release git openssl-devel
 ```
 #### Install dependencies for building taosTools on CentOS
@@ -88,7 +88,7 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
 sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
 ```
-#### CentOS 8/Rocky Linux
+#### CentOS 8/Fedora/Rocky Linux
 ```
 sudo yum install -y epel-release
@@ -101,7 +101,7 @@ sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson
 If installing powertools fails, you can try the following instead:
 ```
-sudo yum config-manager --set-enabled Powertools
+sudo yum config-manager --set-enabled powertools
 ```
 #### CentOS + devtoolset
@@ -117,7 +117,7 @@ scl enable devtoolset-9 -- bash
 ### macOS
 ```
-brew install argp-standalone pkgconfig
+brew install argp-standalone gflags pkgconfig
 ```
 ### Set up the golang development environment
...
@@ -76,14 +76,14 @@ sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-d
 ```bash
 sudo yum install epel-release
 sudo yum update
-sudo yum install -y gcc gcc-c++ make cmake3 git openssl-devel
+sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
 sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
 ```
-### CentOS 8 & Fedora
+### CentOS 8/Fedora/Rocky Linux
 ```bash
-sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
+sudo dnf install -y gcc gcc-c++ make cmake epel-release gflags git openssl-devel
 ```
 #### Install build dependencies for taosTools on CentOS
@@ -94,7 +94,7 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
 sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
 ```
-#### CentOS 8/Rocky Linux
+#### CentOS 8/Fedora/Rocky Linux
 ```
 sudo yum install -y epel-release
@@ -124,7 +124,7 @@ scl enable devtoolset-9 -- bash
 ### macOS
 ```
-brew install argp-standalone pkgconfig
+brew install argp-standalone gflags pkgconfig
 ```
 ### Setup golang environment
...
 cmake_minimum_required(VERSION 3.0)
-set(CMAKE_VERBOSE_MAKEFILE OFF)
+set(CMAKE_VERBOSE_MAKEFILE ON)
 set(TD_BUILD_TAOSA_INTERNAL FALSE)
 #set output directory
@@ -115,15 +115,6 @@ ELSE ()
     SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${GCC_COVERAGE_COMPILE_FLAGS} ${GCC_COVERAGE_LINK_FLAGS}")
 ENDIF ()
-    IF (${BUILD_SANITIZER})
-        SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0")
-        SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0")
-        MESSAGE(STATUS "Compile with Address Sanitizer!")
-    ELSE ()
-        SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
-        SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
-    ENDIF ()
 # disable all assert
 IF ((${DISABLE_ASSERT} MATCHES "true") OR (${DISABLE_ASSERTS} MATCHES "true"))
     ADD_DEFINITIONS(-DDISABLE_ASSERT)
@@ -165,4 +156,20 @@ ELSE ()
     MESSAGE(STATUS "SIMD instructions (FMA/AVX/AVX2) is ACTIVATED")
     ENDIF()
+    # build mode
+    SET(CMAKE_C_FLAGS_REL "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -O3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
+    SET(CMAKE_CXX_FLAGS_REL "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -O3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
+
+    IF (${BUILD_SANITIZER})
+        SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0")
+        SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0")
+        MESSAGE(STATUS "Compile with Address Sanitizer!")
+    ELSEIF (${BUILD_RELEASE})
+        SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS_REL}")
+        SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS_REL}")
+    ELSE ()
+        SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -g3 -gdwarf-2 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
+        SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-reserved-user-defined-literal -g3 -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
+    ENDIF ()
 ENDIF ()
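Which of the three flag sets applies is decided at configure time by `BUILD_SANITIZER` and `BUILD_RELEASE`. A hedged sketch of selecting the sanitizer build (the option name comes from this diff; the exact invocation is illustrative):

```bash
# Configure a debug build with AddressSanitizer/UBSan instrumentation enabled
mkdir -p debug && cd debug
cmake .. -DBUILD_SANITIZER=1
make -j"$(nproc)"
```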
@@ -64,12 +64,25 @@ IF(${TD_WINDOWS})
         ON
     )
+    MESSAGE("build geos Win32")
+    option(
+        BUILD_GEOS
+        "If build geos on Windows"
+        ON
+    )
 ELSEIF (TD_DARWIN_64)
     IF(${BUILD_TEST})
         add_definitions(-DCOMPILER_SUPPORTS_CXX13)
     ENDIF ()
 ENDIF ()
+
+option(
+    BUILD_GEOS
+    "If build geos on Windows"
+    ON
+)
 option(
     BUILD_SHARED_LIBS
     ""
@@ -109,7 +122,7 @@ option(
 option(
     BUILD_WITH_ROCKSDB
     "If build with rocksdb"
-    OFF
+    ON
 )
 option(
@@ -171,3 +184,14 @@ option(
     ON
 )
+
+option(
+    BUILD_RELEASE
+    "If build release version"
+    OFF
+)
+
+option(
+    BUILD_CONTRIB
+    "If build thirdpart from source"
+    OFF
+)
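`BUILD_RELEASE` selects the `-O3` flag set defined in cmake.define, and `BUILD_CONTRIB` switches RocksDB to a from-source build (see the contrib changes below). A hedged configure sketch (values are illustrative; TDengine's own packaging scripts may set these differently):

```bash
# Release-flag build that also compiles third-party dependencies from source
cmake .. -DBUILD_RELEASE=true -DBUILD_CONTRIB=true
make -j"$(nproc)"
```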
@@ -121,6 +121,12 @@ IF ("${CPUTYPE}" STREQUAL "")
         SET(TD_LOONGARCH_64 TRUE)
         ADD_DEFINITIONS("-D_TD_LOONGARCH_")
         ADD_DEFINITIONS("-D_TD_LOONGARCH_64")
+    ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "mips64")
+        SET(PLATFORM_ARCH_STR "mips")
+        MESSAGE(STATUS "input cpuType: mips64")
+        SET(TD_MIPS_64 TRUE)
+        ADD_DEFINITIONS("-D_TD_MIPS_")
+        ADD_DEFINITIONS("-D_TD_MIPS_64")
     ENDIF ()
 ELSE ()
     # if generate ARM version:
@@ -162,7 +168,27 @@ ELSE ()
     ENDIF ()
 ENDIF ()
+
+IF(APPLE)
+    set(CMAKE_THREAD_LIBS_INIT "-lpthread")
+    set(CMAKE_HAVE_THREADS_LIBRARY 1)
+    set(CMAKE_USE_WIN32_THREADS_INIT 0)
+    set(CMAKE_USE_PTHREADS 1)
+    set(THREADS_PREFER_PTHREAD_FLAG ON)
+ENDIF()
 MESSAGE(STATUS "Platform arch:" ${PLATFORM_ARCH_STR})
+
+set(TD_DEPS_DIR "x86")
+if (TD_LINUX)
+    IF (TD_ARM_64 OR TD_ARM_32)
+        set(TD_DEPS_DIR "arm")
+    ELSEIF (TD_MIPS_64)
+        set(TD_DEPS_DIR "mips")
+    ELSE()
+        set(TD_DEPS_DIR "x86")
+    ENDIF()
+endif()
+MESSAGE(STATUS "DEPS_DIR: " ${TD_DEPS_DIR})
 MESSAGE("C Compiler: ${CMAKE_C_COMPILER} (${CMAKE_C_COMPILER_ID}, ${CMAKE_C_COMPILER_VERSION})")
 MESSAGE("CXX Compiler: ${CMAKE_CXX_COMPILER} (${CMAKE_C_COMPILER_ID}, ${CMAKE_CXX_COMPILER_VERSION})")
@@ -2,7 +2,7 @@
 IF (DEFINED VERNUMBER)
     SET(TD_VER_NUMBER ${VERNUMBER})
 ELSE ()
-    SET(TD_VER_NUMBER "3.0.4.1")
+    SET(TD_VER_NUMBER "3.1.0.0.alpha")
 ENDIF ()
 IF (DEFINED VERCOMPATIBLE)
...
# geos
ExternalProject_Add(geos
    GIT_REPOSITORY https://github.com/libgeos/geos.git
    GIT_TAG 3.12.0
    SOURCE_DIR "${TD_CONTRIB_DIR}/geos"
    BINARY_DIR ""
    CONFIGURE_COMMAND ""
    BUILD_COMMAND ""
    INSTALL_COMMAND ""
    TEST_COMMAND ""
)
 # rocksdb
-ExternalProject_Add(rocksdb
-    GIT_REPOSITORY https://github.com/taosdata-contrib/rocksdb.git
-    GIT_TAG v6.23.3
-    SOURCE_DIR "${TD_CONTRIB_DIR}/rocksdb"
-    CONFIGURE_COMMAND ""
-    BUILD_COMMAND ""
-    INSTALL_COMMAND ""
-    TEST_COMMAND ""
-)
+if (${BUILD_CONTRIB})
+    ExternalProject_Add(rocksdb
+        URL https://github.com/facebook/rocksdb/archive/refs/tags/v8.1.1.tar.gz
+        URL_HASH MD5=3b4c97ee45df9c8a5517308d31ab008b
+        DOWNLOAD_NO_PROGRESS 1
+        DOWNLOAD_DIR "${TD_CONTRIB_DIR}/deps-download"
+        SOURCE_DIR "${TD_CONTRIB_DIR}/rocksdb"
+        CONFIGURE_COMMAND ""
+        BUILD_COMMAND ""
+        INSTALL_COMMAND ""
+        TEST_COMMAND ""
+    )
+else()
+    if (NOT ${TD_LINUX})
+        ExternalProject_Add(rocksdb
+            URL https://github.com/facebook/rocksdb/archive/refs/tags/v8.1.1.tar.gz
+            URL_HASH MD5=3b4c97ee45df9c8a5517308d31ab008b
+            DOWNLOAD_NO_PROGRESS 1
+            DOWNLOAD_DIR "${TD_CONTRIB_DIR}/deps-download"
+            SOURCE_DIR "${TD_CONTRIB_DIR}/rocksdb"
+            CONFIGURE_COMMAND ""
+            BUILD_COMMAND ""
+            INSTALL_COMMAND ""
+            TEST_COMMAND ""
+        )
+    endif()
+endif()
@@ -2,6 +2,7 @@
 # stub
 ExternalProject_Add(stub
     GIT_REPOSITORY https://github.com/coolxv/cpp-stub.git
+    GIT_TAG 5e903b8e
     GIT_SUBMODULES "src"
     SOURCE_DIR "${TD_CONTRIB_DIR}/cpp-stub"
     BINARY_DIR "${TD_CONTRIB_DIR}/cpp-stub/src"
...
@@ -2,7 +2,7 @@
 # taosadapter
 ExternalProject_Add(taosadapter
     GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
-    GIT_TAG 565ca21
+    GIT_TAG main
     SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
     BINARY_DIR ""
     #BUILD_IN_SOURCE TRUE
...
@@ -2,7 +2,7 @@
 # taos-tools
 ExternalProject_Add(taos-tools
     GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
-    GIT_TAG 4378702
+    GIT_TAG 3.0
    SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
     BINARY_DIR ""
     #BUILD_IN_SOURCE TRUE
...
@@ -77,11 +77,23 @@ if(${BUILD_WITH_LEVELDB})
     cat("${TD_SUPPORT_DIR}/leveldb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
 endif(${BUILD_WITH_LEVELDB})
-# rocksdb
-if(${BUILD_WITH_ROCKSDB})
-    cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
-    add_definitions(-DUSE_ROCKSDB)
-endif(${BUILD_WITH_ROCKSDB})
+if (${BUILD_CONTRIB})
+    if(${BUILD_WITH_ROCKSDB})
+        cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
+        add_definitions(-DUSE_ROCKSDB)
+    endif()
+else()
+    if (NOT ${TD_LINUX})
+        if(${BUILD_WITH_ROCKSDB})
+            cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
+            add_definitions(-DUSE_ROCKSDB)
+        endif(${BUILD_WITH_ROCKSDB})
+    else()
+        if(${BUILD_WITH_ROCKSDB})
+            add_definitions(-DUSE_ROCKSDB)
+        endif(${BUILD_WITH_ROCKSDB})
+    endif()
+endif()
 # canonical-raft
 if(${BUILD_WITH_CRAFT})
@@ -134,6 +146,11 @@ if(${BUILD_ADDR2LINE})
     endif(NOT ${TD_WINDOWS})
 endif(${BUILD_ADDR2LINE})
+# geos
+if(${BUILD_GEOS})
+    cat("${TD_SUPPORT_DIR}/geos_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
+endif()
 # download dependencies
 configure_file(${CONTRIB_TMP_FILE} "${TD_CONTRIB_DIR}/deps-download/CMakeLists.txt")
 execute_process(COMMAND "${CMAKE_COMMAND}" -G "${CMAKE_GENERATOR}" .
@@ -222,19 +239,113 @@ endif(${BUILD_WITH_LEVELDB})
 # rocksdb
 # To support rocksdb build on ubuntu: sudo apt-get install libgflags-dev
-if(${BUILD_WITH_ROCKSDB})
-    SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized")
-    option(WITH_TESTS "" OFF)
-    option(WITH_BENCHMARK_TOOLS "" OFF)
-    option(WITH_TOOLS "" OFF)
-    option(WITH_LIBURING "" OFF)
-    option(ROCKSDB_BUILD_SHARED "Build shared versions of the RocksDB libraries" OFF)
-    add_subdirectory(rocksdb EXCLUDE_FROM_ALL)
-    target_include_directories(
-        rocksdb
-        PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/rocksdb/include>
-    )
-endif(${BUILD_WITH_ROCKSDB})
+if (${BUILD_WITH_UV})
+    if(${TD_LINUX})
+        set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS_REL}")
+        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS_REL}")
+        IF ("${CMAKE_BUILD_TYPE}" STREQUAL "")
+            SET(CMAKE_BUILD_TYPE Release)
+        endif()
+    endif(${TD_LINUX})
+endif (${BUILD_WITH_UV})
+
+if (${BUILD_WITH_ROCKSDB})
+    if (${BUILD_CONTRIB})
+        if(${TD_LINUX})
+            SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS_REL} -Wno-error=maybe-uninitialized -Wno-error=unused-but-set-variable -Wno-error=unused-variable -Wno-error=unused-function -Wno-errno=unused-private-field -Wno-error=unused-result")
+            if ("${CMAKE_BUILD_TYPE}" STREQUAL "")
+                SET(CMAKE_BUILD_TYPE Release)
+            endif()
+        endif(${TD_LINUX})
+        MESSAGE(STATUS "CXXXX STATUS CONFIG: " ${CMAKE_CXX_FLAGS})
+        if(${TD_DARWIN})
+            SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized")
+        endif(${TD_DARWIN})
+        if (${TD_DARWIN_ARM64})
+            set(HAS_ARMV8_CRC true)
+        endif(${TD_DARWIN_ARM64})
+        if (${TD_WINDOWS})
+            SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4244 /wd4819")
+            option(WITH_JNI "" OFF)
+            option(WITH_MD_LIBRARY "build with MD" OFF)
+            set(SYSTEM_LIBS ${SYSTEM_LIBS} shlwapi.lib rpcrt4.lib)
+        endif(${TD_WINDOWS})
+        if(${TD_DARWIN})
+            option(HAVE_THREAD_LOCAL "" OFF)
+            option(WITH_IOSTATS_CONTEXT "" OFF)
+            option(WITH_PERF_CONTEXT "" OFF)
+        endif(${TD_DARWIN})
+        option(WITH_FALLOCATE "" OFF)
+        option(WITH_JEMALLOC "" OFF)
+        option(WITH_GFLAGS "" OFF)
+        option(PORTABLE "" ON)
+        option(WITH_LIBURING "" OFF)
+        option(FAIL_ON_WARNINGS OFF)
+        option(WITH_TESTS "" OFF)
+        option(WITH_BENCHMARK_TOOLS "" OFF)
+        option(WITH_TOOLS "" OFF)
+        option(WITH_LIBURING "" OFF)
+        option(ROCKSDB_BUILD_SHARED "Build shared versions of the RocksDB libraries" OFF)
+        add_subdirectory(rocksdb EXCLUDE_FROM_ALL)
+        target_include_directories(
+            rocksdb
+            PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/rocksdb/include>
+        )
+    else()
+        if (NOT ${TD_LINUX})
+            MESSAGE(STATUS "CXXXX STATUS CONFIG: " ${CMAKE_CXX_FLAGS})
+            if(${TD_DARWIN})
+                SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized")
+            endif(${TD_DARWIN})
+            if (${TD_DARWIN_ARM64})
+                set(HAS_ARMV8_CRC true)
+            endif(${TD_DARWIN_ARM64})
+            if (${TD_WINDOWS})
+                SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4244 /wd4819")
+                option(WITH_JNI "" OFF)
+                option(WITH_MD_LIBRARY "build with MD" OFF)
+                set(SYSTEM_LIBS ${SYSTEM_LIBS} shlwapi.lib rpcrt4.lib)
+            endif(${TD_WINDOWS})
+            if(${TD_DARWIN})
+                option(HAVE_THREAD_LOCAL "" OFF)
+                option(WITH_IOSTATS_CONTEXT "" OFF)
+                option(WITH_PERF_CONTEXT "" OFF)
+            endif(${TD_DARWIN})
+            option(WITH_FALLOCATE "" OFF)
+            option(WITH_JEMALLOC "" OFF)
+            option(WITH_GFLAGS "" OFF)
+            option(PORTABLE "" ON)
+            option(WITH_LIBURING "" OFF)
+            option(FAIL_ON_WARNINGS OFF)
+            option(WITH_TESTS "" OFF)
+            option(WITH_BENCHMARK_TOOLS "" OFF)
+            option(WITH_TOOLS "" OFF)
+            option(WITH_LIBURING "" OFF)
+            option(ROCKSDB_BUILD_SHARED "Build shared versions of the RocksDB libraries" OFF)
+            add_subdirectory(rocksdb EXCLUDE_FROM_ALL)
+            target_include_directories(
+                rocksdb
+                PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/rocksdb/include>
+            )
+        endif()
+    endif()
+endif()
 # lucene
 # To support build on ubuntu: sudo apt-get install libboost-all-dev
@@ -434,6 +545,23 @@ if(${BUILD_ADDR2LINE})
     endif(NOT ${TD_WINDOWS})
 endif(${BUILD_ADDR2LINE})
+# geos
+if(${BUILD_GEOS})
+    if(${TD_LINUX})
+        set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS_REL}")
+        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS_REL}")
+        IF ("${CMAKE_BUILD_TYPE}" STREQUAL "")
+            SET(CMAKE_BUILD_TYPE Release)
+        endif()
+    endif(${TD_LINUX})
+    option(BUILD_SHARED_LIBS "Build GEOS with shared libraries" OFF)
+    add_subdirectory(geos EXCLUDE_FROM_ALL)
+    unset(CMAKE_CXX_STANDARD CACHE) # undo libgeos's setting of global CMAKE_CXX_STANDARD
+    target_include_directories(
+        geos_c
+        PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/geos/include>
+    )
+endif(${BUILD_GEOS})
 # ================================================================================================
 # Build test
...
message("contrib test/rocksdb:" ${BUILD_DEPENDENCY_TESTS})
add_executable(rocksdbTest "") add_executable(rocksdbTest "")
target_sources(rocksdbTest target_sources(rocksdbTest
PRIVATE PRIVATE
......
 #include <assert.h>
+#include <bits/stdint-uintn.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
@@ -9,38 +10,307 @@
 const char DBPath[] = "rocksdb_c_simple_example";
 const char DBBackupPath[] = "/tmp/rocksdb_c_simple_example_backup";
+
+static const int32_t endian_test_var = 1;
+#define IS_LITTLE_ENDIAN() (*(uint8_t *)(&endian_test_var) != 0)
+#define TD_RT_ENDIAN() (IS_LITTLE_ENDIAN() ? TD_LITTLE_ENDIAN : TD_BIG_ENDIAN)
+
+#define POINTER_SHIFT(p, b) ((void *)((char *)(p) + (b)))
+
+static void *taosDecodeFixedU64(const void *buf, uint64_t *value) {
+  if (IS_LITTLE_ENDIAN()) {
+    memcpy(value, buf, sizeof(*value));
+  } else {
+    ((uint8_t *)value)[7] = ((uint8_t *)buf)[0];
+    ((uint8_t *)value)[6] = ((uint8_t *)buf)[1];
+    ((uint8_t *)value)[5] = ((uint8_t *)buf)[2];
+    ((uint8_t *)value)[4] = ((uint8_t *)buf)[3];
+    ((uint8_t *)value)[3] = ((uint8_t *)buf)[4];
+    ((uint8_t *)value)[2] = ((uint8_t *)buf)[5];
+    ((uint8_t *)value)[1] = ((uint8_t *)buf)[6];
+    ((uint8_t *)value)[0] = ((uint8_t *)buf)[7];
+  }
+  return POINTER_SHIFT(buf, sizeof(*value));
+}
+
+// ---- Fixed U64
+static int32_t taosEncodeFixedU64(void **buf, uint64_t value) {
+  if (buf != NULL) {
+    if (IS_LITTLE_ENDIAN()) {
+      memcpy(*buf, &value, sizeof(value));
+    } else {
+      ((uint8_t *)(*buf))[0] = value & 0xff;
+      ((uint8_t *)(*buf))[1] = (value >> 8) & 0xff;
+      ((uint8_t *)(*buf))[2] = (value >> 16) & 0xff;
+      ((uint8_t *)(*buf))[3] = (value >> 24) & 0xff;
+      ((uint8_t *)(*buf))[4] = (value >> 32) & 0xff;
+      ((uint8_t *)(*buf))[5] = (value >> 40) & 0xff;
+      ((uint8_t *)(*buf))[6] = (value >> 48) & 0xff;
+      ((uint8_t *)(*buf))[7] = (value >> 56) & 0xff;
+    }
+    *buf = POINTER_SHIFT(*buf, sizeof(value));
+  }
+  return (int32_t)sizeof(value);
+}
+
+typedef struct KV {
+  uint64_t k1;
+  uint64_t k2;
+} KV;
+
+int kvSerial(KV *kv, char *buf) {
+  int len = 0;
+  len += taosEncodeFixedU64((void **)&buf, kv->k1);
+  len += taosEncodeFixedU64((void **)&buf, kv->k2);
+  return len;
+}
+const char *kvDBName(void *name) { return "kvDBname"; }
+int kvDBComp(void *state, const char *aBuf, size_t aLen, const char *bBuf, size_t bLen) {
+  KV w1, w2;
+  memset(&w1, 0, sizeof(w1));
+  memset(&w2, 0, sizeof(w2));
+
+  char *p1 = (char *)aBuf;
+  char *p2 = (char *)bBuf;
+  // p1 += 1;
+  // p2 += 1;
+
+  p1 = taosDecodeFixedU64(p1, &w1.k1);
+  p2 = taosDecodeFixedU64(p2, &w2.k1);
+
+  p1 = taosDecodeFixedU64(p1, &w1.k2);
+  p2 = taosDecodeFixedU64(p2, &w2.k2);
+
+  if (w1.k1 < w2.k1) {
+    return -1;
+  } else if (w1.k1 > w2.k1) {
+    return 1;
+  }
+
+  if (w1.k2 < w2.k2) {
+    return -1;
+  } else if (w1.k2 > w2.k2) {
+    return 1;
+  }
+  return 0;
+}
+int kvDeserial(KV *kv, char *buf) {
+  char *p1 = (char *)buf;
+  // p1 += 1;
+  p1 = taosDecodeFixedU64(p1, &kv->k1);
+  p1 = taosDecodeFixedU64(p1, &kv->k2);
+  return 0;
+}
+
 int main(int argc, char const *argv[]) {
-  rocksdb_t * db;
+  rocksdb_t *db;
   rocksdb_backup_engine_t *be;
-  rocksdb_options_t * options = rocksdb_options_create();
-  rocksdb_options_set_create_if_missing(options, 1);
-
-  // open DB
   char *err = NULL;
-  db = rocksdb_open(options, DBPath, &err);
-
-  // Write
-  rocksdb_writeoptions_t *writeoptions = rocksdb_writeoptions_create();
-  rocksdb_put(db, writeoptions, "key", 3, "value", 5, &err);
+  const char *path = "/tmp/db";
+
+  rocksdb_options_t *opt = rocksdb_options_create();
+  rocksdb_options_set_create_if_missing(opt, 1);
+  rocksdb_options_set_create_missing_column_families(opt, 1);

   // Read
   rocksdb_readoptions_t *readoptions = rocksdb_readoptions_create();
-  rocksdb_readoptions_set_snapshot(readoptions, rocksdb_create_snapshot(db));
+  // rocksdb_readoptions_set_snapshot(readoptions, rocksdb_create_snapshot(db));
+  int len = 1;
+  char buf[256] = {0};
   size_t vallen = 0;
-  char * val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
-  printf("val:%s\n", val);
-
-  // Update
-  // rocksdb_put(db, writeoptions, "key", 3, "eulav", 5, &err);
-
-  // Delete
-  rocksdb_delete(db, writeoptions, "key", 3, &err);
-
-  // Read again
-  val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
-  printf("val:%s\n", val);
+  char *val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
+  snprintf(buf, vallen + 5, "val:%s", val);
+  printf("%ld %ld %s\n", strlen(val), vallen, buf);
+
+  char **cfName = calloc(len, sizeof(char *));
+  for (int i = 0; i < len; i++) {
+    cfName[i] = "test";
+  }
+  const rocksdb_options_t **cfOpt = malloc(len * sizeof(rocksdb_options_t *));
+  for (int i = 0; i < len; i++) {
+    cfOpt[i] = rocksdb_options_create_copy(opt);
+    if (i != 0) {
+      rocksdb_comparator_t *comp = rocksdb_comparator_create(NULL, NULL, kvDBComp, kvDBName);
+      rocksdb_options_set_comparator((rocksdb_options_t *)cfOpt[i], comp);
+    }
+  }
+
+  rocksdb_column_family_handle_t **cfHandle = malloc(len * sizeof(rocksdb_column_family_handle_t *));
+  db = rocksdb_open_column_families(opt, path, len, (const char *const *)cfName, cfOpt, cfHandle, &err);
+
+  {
+    rocksdb_readoptions_t *rOpt = rocksdb_readoptions_create();
+    size_t vlen = 0;
+    char *v = rocksdb_get_cf(db, rOpt, cfHandle[0], "key", strlen("key"), &vlen, &err);
+    printf("Get value %s, and len = %d\n", v, (int)vlen);
+  }
+
+  rocksdb_writeoptions_t *wOpt = rocksdb_writeoptions_create();
+  rocksdb_writebatch_t *wBatch = rocksdb_writebatch_create();
+  rocksdb_writebatch_put_cf(wBatch, cfHandle[0], "key", strlen("key"), "value", strlen("value"));
+  rocksdb_write(db, wOpt, wBatch, &err);
+
+  rocksdb_readoptions_t *rOpt = rocksdb_readoptions_create();
+  size_t vlen = 0;
+
+  {
+    rocksdb_writeoptions_t *wOpt = rocksdb_writeoptions_create();
+    rocksdb_writebatch_t *wBatch = rocksdb_writebatch_create();
+    for (int i = 0; i < 100; i++) {
+      char buf[128] = {0};
+      KV kv = {.k1 = (100 - i) % 26, .k2 = i % 26};
+      kvSerial(&kv, buf);
+      rocksdb_writebatch_put_cf(wBatch, cfHandle[1], buf, sizeof(kv), "value", strlen("value"));
+    }
+    rocksdb_write(db, wOpt, wBatch, &err);
+  }
+  {
+    {
+      char buf[128] = {0};
+      KV kv = {.k1 = 0, .k2 = 0};
+      kvSerial(&kv, buf);
+      char *v = rocksdb_get_cf(db, rOpt, cfHandle[1], buf, sizeof(kv), &vlen, &err);
+      printf("Get value %s, and len = %d, xxxx\n", v, (int)vlen);
+      rocksdb_iterator_t *iter = rocksdb_create_iterator_cf(db, rOpt, cfHandle[1]);
+      rocksdb_iter_seek_to_first(iter);
+      int i = 0;
+      while (rocksdb_iter_valid(iter)) {
+        size_t klen, vlen;
+        const char *key = rocksdb_iter_key(iter, &klen);
+        const char *value = rocksdb_iter_value(iter, &vlen);
+        KV kv;
+        kvDeserial(&kv, (char *)key);
+        printf("kv1: %d\t kv2: %d, len:%d, value = %s\n", (int)(kv.k1), (int)(kv.k2), (int)(klen), value);
+        i++;
+        rocksdb_iter_next(iter);
+      }
+      rocksdb_iter_destroy(iter);
+    }
+    {
+      char buf[128] = {0};
+      KV kv = {.k1 = 0, .k2 = 0};
+      int len = kvSerial(&kv, buf);
+      rocksdb_iterator_t *iter = rocksdb_create_iterator_cf(db, rOpt, cfHandle[1]);
+      rocksdb_iter_seek(iter, buf, len);
+      if (!rocksdb_iter_valid(iter)) {
+        printf("invalid iter");
+      }
+      {
+        char buf[128] = {0};
+        KV kv = {.k1 = 100, .k2 = 0};
+        int len = kvSerial(&kv, buf);
+        rocksdb_iterator_t *iter = rocksdb_create_iterator_cf(db, rOpt, cfHandle[1]);
+        rocksdb_iter_seek(iter, buf, len);
+        if (!rocksdb_iter_valid(iter)) {
+          printf("invalid iter\n");
+          rocksdb_iter_seek_for_prev(iter, buf, len);
+          if (!rocksdb_iter_valid(iter)) {
+            printf("stay invalid iter\n");
+          } else {
+            size_t klen = 0, vlen = 0;
+            const char *key = rocksdb_iter_key(iter, &klen);
+            const char *value = rocksdb_iter_value(iter, &vlen);
+            KV kv;
+            kvDeserial(&kv, (char *)key);
+            printf("kv1: %d\t kv2: %d, len:%d, value = %s\n", (int)(kv.k1), (int)(kv.k2), (int)(klen), value);
+          }
+        }
+      }
+    }
+  }
+  // char *v = rocksdb_get_cf(db, rOpt, cfHandle[0], "key", strlen("key"), &vlen, &err);
+  // printf("Get value %s, and len = %d\n", v, (int)vlen);
+
+  rocksdb_column_family_handle_destroy(cfHandle[0]);
+  rocksdb_column_family_handle_destroy(cfHandle[1]);
   rocksdb_close(db);
+
+  // {
+  //   // rocksdb_options_t *Options = rocksdb_options_create();
+  //   db = rocksdb_open(comm, path, &err);
+  //   if (db != NULL) {
+  //     rocksdb_options_t *cfo = rocksdb_options_create_copy(comm);
+  //     rocksdb_comparator_t *cmp1 = rocksdb_comparator_create(NULL, NULL, kvDBComp, kvDBName);
+  //     rocksdb_options_set_comparator(cfo, cmp1);
+  //     rocksdb_column_family_handle_t *handle = rocksdb_create_column_family(db, cfo, "cf1", &err);
+  //     rocksdb_column_family_handle_destroy(handle);
+  //     rocksdb_close(db);
+  //     db = NULL;
+  //   }
+  // }
+  // int ncf = 2;
+
+  // rocksdb_column_family_handle_t **pHandle = malloc(ncf * sizeof(rocksdb_column_family_handle_t *));
+  // {
+  //   rocksdb_options_t *options = rocksdb_options_create_copy(comm);
+  //   rocksdb_comparator_t *cmp1 = rocksdb_comparator_create(NULL, NULL, kvDBComp, kvDBName);
+  //   rocksdb_options_t *dbOpts1 = rocksdb_options_create_copy(comm);
+  //   rocksdb_options_t *dbOpts2 = rocksdb_options_create_copy(comm);
+  //   rocksdb_options_set_comparator(dbOpts2, cmp1);
+  //   // rocksdb_column_family_handle_t *cf = rocksdb_create_column_family(db, dbOpts1, "cmp1", &err);
+  //   const char *pName[] = {"default", "cf1"};
+  //   const rocksdb_options_t **pOpts = malloc(ncf * sizeof(rocksdb_options_t *));
+  //   pOpts[0] = dbOpts1;
+  //   pOpts[1] = dbOpts2;
+  //   rocksdb_options_t *allOptions = rocksdb_options_create_copy(comm);
+  //   db = rocksdb_open_column_families(allOptions, "test", ncf, pName, pOpts, pHandle, &err);
+  // }
+
+  // // rocksdb_options_t *options = rocksdb_options_create();
+  // // rocksdb_options_set_create_if_missing(options, 1);
+  // // //rocksdb_open_column_families(const rocksdb_options_t *options, const char *name, int num_column_families,
+  // //                               const char *const *column_family_names,
+  // //                               const rocksdb_options_t *const *column_family_options,
+  // //                               rocksdb_column_family_handle_t **column_family_handles, char **errptr);
+  // for (int i = 0; i < 100; i++) {
+  //   char buf[128] = {0};
+  //   rocksdb_writeoptions_t *wopt = rocksdb_writeoptions_create();
+  //   KV kv = {.k1 = i, .k2 = i};
+  //   kvSerial(&kv, buf);
+  //   rocksdb_put_cf(db, wopt, pHandle[0], buf, strlen(buf), (const char *)&i, sizeof(i), &err);
+  // }
+  // rocksdb_close(db);
+
+  // Write
+  // rocksdb_writeoptions_t *writeoptions = rocksdb_writeoptions_create();
+  // rocksdb_put(db, writeoptions, "key", 3, "value", 5, &err);
+
+  //// Read
+  // rocksdb_readoptions_t *readoptions = rocksdb_readoptions_create();
+  // rocksdb_readoptions_set_snapshot(readoptions, rocksdb_create_snapshot(db));
+  // size_t vallen = 0;
+  // char *val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
+  // printf("val:%s\n", val);
+
+  //// Update
+  //// rocksdb_put(db, writeoptions, "key", 3, "eulav", 5, &err);
+
+  //// Delete
+  // rocksdb_delete(db, writeoptions, "key", 3, &err);
+
+  //// Read again
+  // val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
+  // printf("val:%s\n", val);
+
+  // rocksdb_close(db);
   return 0;
 }
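The repository wires this file into the `rocksdbTest` target shown in the contrib test CMakeLists above. A hedged sketch of building it standalone instead, assuming a shared `librocksdb` is installed system-wide (the source file name is illustrative, and link flags vary with how RocksDB was built; a static build additionally needs `-lstdc++` and the compression libraries):

```bash
# Compile the column-family example against an installed RocksDB and run it
gcc -o rocksdb_example main.c -lrocksdb -lpthread
./rocksdb_example
```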
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
@@ -4,7 +4,7 @@ if(${BUILD_DOCS})
     find_package(Doxygen)
     if (DOXYGEN_FOUND)
         # Build the doc
-        set(DOXYGEN_IN ${TD_SOURCE_DIR}/docs/Doxyfile.in)
+        set(DOXYGEN_IN ${TD_SOURCE_DIR}/docs/doxgen/Doxyfile.in)
         set(DOXYGEN_OUT ${CMAKE_BINARY_DIR}/Doxyfile)
         configure_file(${DOXYGEN_IN} ${DOXYGEN_OUT} @ONLY)
...
@@ -5,7 +5,7 @@ description: This website contains the user manuals for TDengine, an open-source
 slug: /
 ---
-TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. Its written mainly for architects, developers, and system administrators.
+TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It's written mainly for architects, developers, and system administrators.
 To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section.
...
@@ -44,7 +44,7 @@ For more details on features, please read through the entire documentation.
 ## Competitive Advantages
-By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb), with the following advantages.
+By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb/), with the following advantages.
 - **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while out performing other time-series databases for data ingestion, querying and data compression.
@@ -57,7 +57,7 @@ By making full use of [characteristics of time series data](https://tdengine.com
 - **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
-- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengines core modules, including cluster feature, are all available under open source licenses. It has gathered over 19k stars on GitHub. There is an active developer community, and over 140k running instances worldwide.
+- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including cluster feature, are all available under open source licenses. It has gathered over 19k stars on GitHub. There is an active developer community, and over 140k running instances worldwide.
 With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced.
@@ -109,8 +109,8 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
 | **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
 | ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | --------------------------------------------------------------------------------------------------------------------------- |
-| Very large total processing capacity | | | √ | TDengines cluster functions can easily improve processing capacity via multi-server coordination. |
+| Very large total processing capacity | | | √ | TDengine's cluster functions can easily improve processing capacity via multi-server coordination. |
-| Extremely high-speed data processing | | | √ | TDengines storage and data processing are optimized for IoT, and can process data many times faster than similar products. |
+| Extremely high-speed data processing | | | √ | TDengine's storage and data processing are optimized for IoT, and can process data many times faster than similar products. |
 | Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
 ### System Maintenance Requirements
@@ -123,11 +123,10 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
 ## Comparison with other databases
-- [Writing Performance Comparison of TDengine and InfluxDB](https://tdengine.com/performance-comparison-of-tdengine-and-influxdb/)
-- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/query-performance-comparison-test-report-tdengine-vs-influxdb/)
-- [TDengine vs OpenTSDB](https://tdengine.com/performance-tdengine-vs-opentsdb/)
-- [TDengine vs Cassandra](https://tdengine.com/performance-tdengine-vs-cassandra/)
-- [TDengine vs InfluxDB](https://tdengine.com/performance-tdengine-vs-influxdb/)
+- [TDengine vs. InfluxDB](https://tdengine.com/tsdb-comparison-influxdb-vs-tdengine/)
+- [TDengine vs. TimescaleDB](https://tdengine.com/tsdb-comparison-timescaledb-vs-tdengine/)
+- [TDengine vs. OpenTSDB](https://tdengine.com/performance-tdengine-vs-opentsdb/)
+- [TDengine vs. Cassandra](https://tdengine.com/performance-tdengine-vs-cassandra/)
 ## More readings
 - [Introduction to Time-Series Database](https://tdengine.com/tsdb/)
...
@@ -127,7 +127,7 @@ To make full use of time-series data characteristics, TDengine adopts a strategy
 If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.**
-TDengine suggests using DCP ID as the table name (like d1001 in the above table). Each DCP may collect one or multiple metrics (like the `current`, `voltage`, `phase` as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index, and wont build the index on any metrics stored. Column wise storage is used.
+TDengine suggests using DCP ID as the table name (like d1001 in the above table). Each DCP may collect one or multiple metrics (like the `current`, `voltage`, `phase` as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index, and won't build the index on any metrics stored. Column wise storage is used.
 Complex devices, such as connected cars, may have multiple DCPs. In this case, multiple tables are created for a single device, one table per DCP.
...
@@ -6,7 +6,7 @@ description: This document describes how to install TDengine in a Docker contain
 This document describes how to install TDengine in a Docker container and perform queries and inserts.
-- The easiest way to explore TDengine is through [TDengine Cloud](http://cloud.tdengine.com).
+- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
 - To get started with TDengine in a non-containerized environment, see [Quick Install from Package](../../get-started/package).
 - If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
...
@@ -10,7 +10,7 @@ import PkgListV3 from "/components/PkgListV3";
 This document describes how to install TDengine on Linux/Windows/macOS and perform queries and inserts.
-- The easiest way to explore TDengine is through [TDengine Cloud](http://cloud.tdengine.com).
+- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
 - To get started with TDengine on Docker, see [Quick Install on Docker](../../get-started/docker).
 - If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
@@ -20,6 +20,19 @@ The standard server installation package includes `taos`, `taosd`, `taosAdapter`
 The TDengine Community Edition is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.tz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/m1 macOS.
+
+## Operating environment requirements
+
+In the Linux system, the minimum requirements for the operating environment are as follows:
+
+linux core version - 3.10.0-1160.83.1.el7.x86_64;
+
+glibc version - 2.17;
+
+If compiling and installing through clone source code, it is also necessary to meet the following requirements:
+
+cmake version - 3.26.4 or above;
+
+gcc version - 9.3.1 or above;
 ## Installation
 <Tabs>
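The requirements added above are easy to verify up front. A small sketch, assuming a GNU userland (compare the output against the minimums listed in the hunk):

```bash
# Check kernel, glibc, and build toolchain against the documented minimums
uname -r          # kernel: 3.10.0-1160.83.1.el7.x86_64 or newer
ldd --version     # glibc: 2.17 or newer
cmake --version   # 3.26.4 or newer, for source builds
gcc --version     # 9.3.1 or newer, for source builds
```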
@@ -102,7 +115,7 @@ sudo apt-get install tdengine
 :::tip
 This installation method is supported only for Debian and Ubuntu.
-::::
+:::
 </TabItem>
 <TabItem label="Windows" value="windows">
@@ -208,6 +221,8 @@ The following `launchctl` commands can help you manage TDengine service:
 - Check TDengine Server status: `sudo launchctl list | grep taosd`
+- Check TDengine Server status details: `launchctl print system/com.tdengine.taosd`
 :::info
 - Please use `sudo` to run `launchctl` to manage _com.tdengine.taosd_ with administrator privileges.
 - The administrator privilege is required for service management to enhance security.
...
@@ -12,4 +12,4 @@ When using REST connection, the feature of bulk pulling can be enabled if the si
 {{#include docs/examples/java/src/main/java/com/taos/example/WSConnectExample.java:main}}
 ```
-More configuration about connection,please refer to [Java Connector](/reference/connector/java)
+More configuration about connection, please refer to [Java Connector](/reference/connector/java)
-```php title="原生连接"
+```php title="native"
 {{#include docs/examples/php/connect.php}}
 ```
@@ -33,7 +33,7 @@ There are two ways for a connector to establish connections to TDengine:
 For REST and native connections, connectors provide similar APIs for performing operations and running SQL statements on your databases. The main difference is the method of establishing the connection, which is not visible to users.
-Key differences
+Key differences:
 3. The REST connection is more accessible with cross-platform support, however it results in a 30% performance downgrade.
 1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
@@ -83,7 +83,7 @@ If `maven` is used to manage the projects, what needs to be done is only adding
 <dependency>
     <groupId>com.taosdata.jdbc</groupId>
     <artifactId>taos-jdbcdriver</artifactId>
-    <version>3.0.0</version>
+    <version>3.2.1</version>
 </dependency>
 ```
@@ -198,7 +198,7 @@ The sample code below are based on dotnet6.0, they may need to be adjusted if yo
 <TabItem label="R" value="r">
 1. Download [taos-jdbcdriver-version-dist.jar](https://repo1.maven.org/maven2/com/taosdata/jdbc/taos-jdbcdriver/3.0.0/).
-2. Install the dependency package `RJDBC`
+2. Install the dependency package `RJDBC`:
 ```R
 install.packages("RJDBC")
@@ -213,7 +213,7 @@ If the client driver (taosc) is already installed, then the C connector is alrea
 </TabItem>
 <TabItem label="PHP" value="php">
-**Download Source Code Package and Unzip**
+**Download Source Code Package and Unzip:**
 ```shell
 curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive/refs/tags/v1.0.2.tar.gz \
@@ -223,13 +223,13 @@ curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive
 > Version number `v1.0.2` is only for example, it can be replaced to any newer version, please check available version from [TDengine PHP Connector Releases](https://github.com/Yurunsoft/php-tdengine/releases).
-**Non-Swoole Environment**
+**Non-Swoole Environment:**
 ```shell
 phpize && ./configure && make -j && make install
 ```
-**Specify TDengine Location**
+**Specify TDengine Location:**
 ```shell
 phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 && make -j && make install
@@ -238,7 +238,7 @@ phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 &&
 > `--with-tdengine-dir=` is followed by the TDengine installation location.
 > This way is useful in case TDengine location can't be found automatically or macOS.
-**Swoole Environment**
+**Swoole Environment:**
 ```shell
 phpize && ./configure --enable-swoole && make -j && make install
@@ -288,6 +288,6 @@ Prior to establishing connection, please make sure TDengine is already running a
 </Tabs>
 :::tip
-If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](https://docs.tdengine.com/train-faq/faq).
+If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](../../train-faq/faq).
 :::
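Most failed connections trace back to DNS or blocked ports rather than the connector itself. A hedged sketch of the first checks to run from the client host (the FQDN is a placeholder; 6030 is the default native port and 6041 the default REST/taosAdapter port):

```bash
# Verify the server FQDN resolves and the default TDengine ports are reachable
ping -c 1 tdengine.example.com
nc -zv tdengine.example.com 6030   # native connection
nc -zv tdengine.example.com 6041   # REST / taosAdapter
```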
...@@ -33,7 +33,7 @@ The below SQL statement is used to insert one row into table "d1001". ...@@ -33,7 +33,7 @@ The below SQL statement is used to insert one row into table "d1001".
INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31); INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31);
``` ```
`ts1` is Unix timestamp, the timestamps which is larger than the difference between current time and KEEP in config is only allowed. For further detial, refer to [TDengine SQL insert timestamp section](/taos-sql/insert). `ts1` is a Unix timestamp; only timestamps later than the current time minus the database's KEEP setting are allowed. For further detail, refer to [TDengine SQL insert timestamp section](/taos-sql/insert).
### Insert Multiple Rows ### Insert Multiple Rows
...@@ -43,7 +43,7 @@ Multiple rows can be inserted in a single SQL statement. The example below inser ...@@ -43,7 +43,7 @@ Multiple rows can be inserted in a single SQL statement. The example below inser
INSERT INTO d1001 VALUES (ts1, 10.2, 220, 0.23) (ts2, 10.3, 218, 0.25); INSERT INTO d1001 VALUES (ts1, 10.2, 220, 0.23) (ts2, 10.3, 218, 0.25);
``` ```
`ts1` and `ts2` is Unix timestamp, the timestamps which is larger than the difference between current time and KEEP in config is only allowed. For further detial, refer to [TDengine SQL insert timestamp section](/taos-sql/insert). `ts1` and `ts2` are Unix timestamps; only timestamps later than the current time minus the database's KEEP setting are allowed. For further detail, refer to [TDengine SQL insert timestamp section](/taos-sql/insert).
### Insert into Multiple Tables ### Insert into Multiple Tables
...@@ -53,7 +53,7 @@ Data can be inserted into multiple tables in the same SQL statement. The example ...@@ -53,7 +53,7 @@ Data can be inserted into multiple tables in the same SQL statement. The example
INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31) (ts2, 12.6, 218, 0.33) d1002 VALUES (ts3, 12.3, 221, 0.31); INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31) (ts2, 12.6, 218, 0.33) d1002 VALUES (ts3, 12.3, 221, 0.31);
``` ```
`ts1`, `ts2` and `ts3` is Unix timestamp, the timestamps which is larger than the difference between current time and KEEP in config is only allowed. For further detial, refer to [TDengine SQL insert timestamp section](/taos-sql/insert). `ts1`, `ts2` and `ts3` are Unix timestamps; only timestamps later than the current time minus the database's KEEP setting are allowed. For further detail, refer to [TDengine SQL insert timestamp section](/taos-sql/insert).
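The three INSERT forms above can be issued from an application. Below is a minimal sketch using the Python connector `taospy`; the host, credentials, demo database `power`, and the concrete millisecond timestamps are assumptions for illustration only:

```python
import taos  # taospy, assumed installed: pip3 install taospy

conn = taos.connect(host="localhost", user="root", password="taosdata", database="power")
# concrete millisecond Unix timestamps standing in for ts1/ts2/ts3;
# they must fall within the database's KEEP window
ts1, ts2, ts3 = 1626861392589, 1626861392590, 1626861392591
# single row
conn.execute(f"INSERT INTO d1001 VALUES ({ts1}, 10.3, 219, 0.31)")
# multiple rows into one table
conn.execute(f"INSERT INTO d1001 VALUES ({ts1}, 10.2, 220, 0.23) ({ts2}, 10.3, 218, 0.25)")
# multiple tables in one statement
conn.execute(
    f"INSERT INTO d1001 VALUES ({ts1}, 10.3, 219, 0.31) ({ts2}, 12.6, 218, 0.33) "
    f"d1002 VALUES ({ts3}, 12.3, 221, 0.31)"
)
conn.close()
```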
For more details about `INSERT` please refer to [INSERT](/taos-sql/insert). For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
......
...@@ -69,7 +69,7 @@ For more details please refer to [InfluxDB Line Protocol](https://docs.influxdat ...@@ -69,7 +69,7 @@ For more details please refer to [InfluxDB Line Protocol](https://docs.influxdat
## Query Examples ## Query Examples
If you want query the data of `location=California.LosAngeles,groupid=2`here is the query SQL: If you want to query the data of `location=California.LosAngeles,groupid=2`, here is the query SQL:
```sql ```sql
SELECT * FROM meters WHERE location = "California.LosAngeles" AND groupid = 2; SELECT * FROM meters WHERE location = "California.LosAngeles" AND groupid = 2;
......
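As a hedged sketch, the same query can be run programmatically through `taospy` (the host, credentials, and database name `test` are assumptions; use the database your schemaless data was written to):

```python
import taos  # taospy, assumed installed

conn = taos.connect(host="localhost", user="root", password="taosdata", database="test")
result = conn.query(
    'SELECT * FROM meters WHERE location = "California.LosAngeles" AND groupid = 2'
)
for row in result:  # the result object is iterable row by row
    print(row)
conn.close()
```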
...@@ -84,7 +84,7 @@ Query OK, 4 row(s) in set (0.005399s) ...@@ -84,7 +84,7 @@ Query OK, 4 row(s) in set (0.005399s)
## Query Examples ## Query Examples
If you want query the data of `location=California.LosAngeles groupid=3`here is the query SQL: If you want to query the data of `location=California.LosAngeles groupid=3`, here is the query SQL:
```sql ```sql
SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 3; SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 3;
......
...@@ -97,7 +97,7 @@ Query OK, 2 row(s) in set (0.004076s) ...@@ -97,7 +97,7 @@ Query OK, 2 row(s) in set (0.004076s)
## Query Examples ## Query Examples
If you want query the data of "tags": {"location": "California.LosAngeles", "groupid": 1}here is the query SQL: If you want query the data of "tags": {"location": "California.LosAngeles", "groupid": 1}, here is the query SQL:
```sql ```sql
SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 1; SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 1;
......
...@@ -49,7 +49,7 @@ If the data source is Kafka, then the application program is a consumer of Kafka ...@@ -49,7 +49,7 @@ If the data source is Kafka, then the application program is a consumer of Kafka
On the server side, database configuration parameter `vgroups` needs to be set carefully to maximize the system performance. If it's set too low, the system capability can't be utilized fully; if it's set too high, unnecessary resource competition may be produced. A normal recommendation for the `vgroups` parameter is 2 times the number of CPU cores. However, depending on the actual system resources, it may still need to be tuned. On the server side, database configuration parameter `vgroups` needs to be set carefully to maximize the system performance. If it's set too low, the system capability can't be utilized fully; if it's set too high, unnecessary resource competition may be produced. A normal recommendation for the `vgroups` parameter is 2 times the number of CPU cores. However, depending on the actual system resources, it may still need to be tuned.
For more configuration parameters, please refer to [Database Configuration](../../../taos-sql/database) and [Server Configuration](../../../reference/config) For more configuration parameters, please refer to [Database Configuration](../../../taos-sql/database) and [Server Configuration](../../../reference/config).
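As a rough sketch of the sizing rule above (the 2x-cores figure is only a starting point, and the database name `db0` is an assumption):

```python
import os
import taos  # taospy, assumed installed

conn = taos.connect(host="localhost", user="root", password="taosdata")
# start from 2 x CPU cores as suggested above, then tune against real workloads
vgroups = 2 * (os.cpu_count() or 4)
conn.execute(f"CREATE DATABASE IF NOT EXISTS db0 VGROUPS {vgroups}")
conn.close()
```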
## Sample Programs ## Sample Programs
...@@ -98,7 +98,7 @@ The main Program is responsible for: ...@@ -98,7 +98,7 @@ The main Program is responsible for:
3. Start reading threads 3. Start reading threads
4. Output writing speed every 10 seconds 4. Output writing speed every 10 seconds
The main program provides 4 parameters for tuning The main program provides 4 parameters for tuning:
1. The number of reading threads, default value is 1 1. The number of reading threads, default value is 1
2. The number of writing threads, default value is 2 2. The number of writing threads, default value is 2
...@@ -192,7 +192,7 @@ TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata" ...@@ -192,7 +192,7 @@ TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
If you want to launch the sample program on a remote server, please follow the steps below: If you want to launch the sample program on a remote server, please follow the steps below:
1. Package the sample programs. Execute below command under directory `TDengine/docs/examples/java` 1. Package the sample programs. Execute the following command under directory `TDengine/docs/examples/java`:
``` ```
mvn package mvn package
``` ```
...@@ -385,7 +385,7 @@ SQLWriter class encapsulates the logic of composing SQL and writing data. Please ...@@ -385,7 +385,7 @@ SQLWriter class encapsulates the logic of composing SQL and writing data. Please
pip3 install faster-fifo pip3 install faster-fifo
``` ```
3. Click the "Copy" in the above sample programs to copy `fast_write_example.py``sql_writer.py` and `mockdatasource.py`. 3. Click the "Copy" in the above sample programs to copy `fast_write_example.py`, `sql_writer.py`, and `mockdatasource.py`.
4. Execute the program 4. Execute the program
......
### python Kafka 客户端 ### Python Kafka Client
For the Python Kafka client, please refer to [kafka client](https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-Python). In this document, we use [kafka-python](http://github.com/dpkp/kafka-python). For the Python Kafka client, please refer to [kafka client](https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-Python). In this document, we use [kafka-python](http://github.com/dpkp/kafka-python).
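For orientation, here is a minimal hedged sketch wiring `kafka-python` to TDengine; the topic name `meters`, broker address, target table, and message format (a ready-made VALUES clause) are all assumptions for illustration:

```python
from kafka import KafkaConsumer  # kafka-python, assumed installed
import taos  # taospy, assumed installed

consumer = KafkaConsumer("meters", bootstrap_servers="localhost:9092", group_id="tdengine-writer")
conn = taos.connect(host="localhost", user="root", password="taosdata", database="power")
for msg in consumer:
    # assume each Kafka message carries a ready-to-append VALUES clause,
    # e.g. b"(now, 10.3, 219, 0.31)"; real code would parse and batch instead
    conn.execute(f"INSERT INTO d1001 VALUES {msg.value.decode()}")
```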
...@@ -88,7 +88,7 @@ In addition to python's built-in multithreading and multiprocessing library, we ...@@ -88,7 +88,7 @@ In addition to python's built-in multithreading and multiprocessing library, we
<details> <details>
<summary>kafka_example_consumer</summary> <summary>kafka_example_consumer</summary>
`kafka_example_consumer` is `consumer`which is responsible for consuming data from kafka and writing it to TDengine. `kafka_example_consumer` is the consumer, which is responsible for consuming data from Kafka and writing it to TDengine.
```py ```py
{{#include docs/examples/python/kafka_example_consumer.py}} {{#include docs/examples/python/kafka_example_consumer.py}}
......
```rust
{{#include docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs}}
```
...@@ -20,10 +20,10 @@ import CAsync from "./_c_async.mdx"; ...@@ -20,10 +20,10 @@ import CAsync from "./_c_async.mdx";
## Introduction ## Introduction
SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
- Query on single column or multiple columns - Query on single column or multiple columns
- Filter on tags or data columns>, <, =, <\>, like - Filter on tags or data columns: >, <, =, <\>, like
- Grouping of results: `Group By` - Grouping of results: `Group By`
- Sorting of results: `Order By` - Sorting of results: `Order By`
- Limit the number of results: `Limit/Offset` - Limit the number of results: `Limit/Offset`
- Windowed aggregate queries for time windows (interval), session windows (session), and state windows (state_window) - Windowed aggregate queries for time windows (interval), session windows (session), and state windows (state_window)
- Arithmetic on columns of numeric types or aggregate results - Arithmetic on columns of numeric types or aggregate results
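For example, a windowed aggregate from the list above can be issued through the Python connector. A sketch, assuming the demo `power.meters` schema and default connection settings:

```python
import taos  # taospy, assumed installed

conn = taos.connect(host="localhost", user="root", password="taosdata", database="power")
# 10-second time windows over one subtable group, per the windowed-aggregate item above
result = conn.query(
    "SELECT _wstart, AVG(current) FROM meters WHERE groupid = 2 INTERVAL(10s)"
)
for row in result:
    print(row)
conn.close()
```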
...@@ -160,7 +160,7 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database ...@@ -160,7 +160,7 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database
:::note :::note
1. With either REST connection or native connection, the above sample code works well. 1. With either REST connection or native connection, the above sample code works well.
2. Please note that `use db` can't be used in case of REST connection because it's stateless. 2. Please note that `use db` can't be used in case of REST connection because it's stateless. You can specify the database name either in the REST endpoint's parameter or by prefixing table names as <db_name>.<table_name> in the SQL command.
::: :::
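As a sketch of the second point above (assuming `taospy`'s companion REST module `taosrest` and a taosAdapter listening on its default port 6041):

```python
import taosrest  # REST connection module shipped with taospy

conn = taosrest.connect(url="http://localhost:6041", user="root", password="taosdata")
# REST is stateless, so there is no `USE power`; qualify the table name instead
result = conn.query("SELECT ts, current FROM power.meters LIMIT 2")
print(result.data)  # rows come back as a list
```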
......
...@@ -23,7 +23,7 @@ By subscribing to a topic, a consumer can obtain the latest data in that topic i ...@@ -23,7 +23,7 @@ By subscribing to a topic, a consumer can obtain the latest data in that topic i
To implement these features, TDengine indexes its write-ahead log (WAL) file for fast random access and provides configurable methods for replacing and retaining this file. You can define a retention period and size for this file. For information, see the CREATE DATABASE statement. In this way, the WAL file is transformed into a persistent storage engine that remembers the order in which events occur. However, note that configuring an overly long retention period for your WAL files makes database compression inefficient. TDengine then uses the WAL file instead of the time-series database as its storage engine for queries in the form of topics. TDengine reads the data from the WAL file; uses a unified query engine instance to perform filtering, transformations, and other operations; and finally pushes the data to consumers. To implement these features, TDengine indexes its write-ahead log (WAL) file for fast random access and provides configurable methods for replacing and retaining this file. You can define a retention period and size for this file. For information, see the CREATE DATABASE statement. In this way, the WAL file is transformed into a persistent storage engine that remembers the order in which events occur. However, note that configuring an overly long retention period for your WAL files makes database compression inefficient. TDengine then uses the WAL file instead of the time-series database as its storage engine for queries in the form of topics. TDengine reads the data from the WAL file; uses a unified query engine instance to perform filtering, transformations, and other operations; and finally pushes the data to consumers.
Tips:The default data subscription is to consume data from the wal. If the wal is deleted, the consumed data will be incomplete. At this time, you can set the parameter experimental.snapshot.enable to true to obtain all data from the tsdb, but in this way, the consumption order of the data cannot be guaranteed. Therefore, it is recommended to set a reasonable retention policy for WAL based on your consumption situation to ensure that you can subscribe all data from WAL. Tips: Data subscription consumes data from the WAL. If some WAL files are deleted according to the WAL retention policy, the deleted data can no longer be consumed. So you need to set a reasonable value for the `WAL_RETENTION_PERIOD` or `WAL_RETENTION_SIZE` parameter when creating the database, and make sure your application consumes the data in a timely way so that no data is lost. This behavior is similar to Kafka and other widely used message queue products.
## Data Schema and API ## Data Schema and API
...@@ -105,6 +105,12 @@ class Consumer: ...@@ -105,6 +105,12 @@ class Consumer:
def poll(self, timeout: float = 1.0): def poll(self, timeout: float = 1.0):
pass pass
def assignment(self):
pass
def close(self): def close(self):
pass pass
...@@ -222,7 +228,7 @@ A database including one supertable and two subtables is created as follows: ...@@ -222,7 +228,7 @@ A database including one supertable and two subtables is created as follows:
```sql ```sql
DROP DATABASE IF EXISTS tmqdb; DROP DATABASE IF EXISTS tmqdb;
CREATE DATABASE tmqdb; CREATE DATABASE tmqdb WAL_RETENTION_PERIOD 3600;
CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16)) TAGS(t1 INT, t3 VARCHAR(16)); CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16)) TAGS(t1 INT, t3 VARCHAR(16));
CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0"); CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0");
CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1"); CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1");
...@@ -238,6 +244,8 @@ The following SQL statement creates a topic in TDengine: ...@@ -238,6 +244,8 @@ The following SQL statement creates a topic in TDengine:
CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1; CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
``` ```
- There is an upper limit to the number of topics that can be created, controlled by the parameter `tmqMaxTopicNum`; the default is 20.
Multiple subscription types are supported. Multiple subscription types are supported.
#### Subscribe to a Column #### Subscribe to a Column
...@@ -259,14 +267,15 @@ You can subscribe to a topic through a SELECT statement. Statements that specify ...@@ -259,14 +267,15 @@ You can subscribe to a topic through a SELECT statement. Statements that specify
Syntax: Syntax:
```sql ```sql
CREATE TOPIC topic_name AS STABLE stb_name CREATE TOPIC topic_name [with meta] AS STABLE stb_name [where_condition]
``` ```
Creating a topic in this manner differs from a `SELECT * from stbName` statement as follows: Creating a topic in this manner differs from a `SELECT * from stbName` statement as follows:
- The table schema can be modified. - The table schema can be modified.
- Unstructured data is returned. The format of the data returned changes based on the supertable schema. - Unstructured data is returned. The format of the data returned changes based on the supertable schema.
- A different table schema may exist for every data block to be processed. - The 'with meta' parameter is optional. When specified, statements that create supertables and subtables are also returned; this is mainly used by taosX for supertable migration.
- The 'where_condition' parameter is optional; it is used to filter and subscribe only to subtables that meet the condition. The WHERE condition cannot reference ordinary columns, only tags or tbname. Functions can be used in the condition to filter tags, but not aggregate functions, because subtable tag values cannot be aggregated. It can also be a constant expression, such as 2>1 (subscribe to all subtables) or false (subscribe to no subtables).
- The data returned does not include tags. - The data returned does not include tags.
### Subscribe to a Database ### Subscribe to a Database
...@@ -274,10 +283,12 @@ Creating a topic in this manner differs from a `SELECT * from stbName` statement ...@@ -274,10 +283,12 @@ Creating a topic in this manner differs from a `SELECT * from stbName` statement
Syntax: Syntax:
```sql ```sql
CREATE TOPIC topic_name [WITH META] AS DATABASE db_name; CREATE TOPIC topic_name [with meta] AS DATABASE db_name;
``` ```
This SQL statement creates a subscription to all tables in the database. You can add the `WITH META` parameter to include schema changes in the subscription, including creating and deleting supertables; adding, deleting, and modifying columns; and creating, deleting, and modifying the tags of subtables. Consumers can determine the message type from the API. Note that this differs from Kafka. This SQL statement creates a subscription to all tables in the database.
- The 'with meta' parameter is optional. When specified, it returns the statements for creating all supertables and subtables in the database; this is mainly used by taosX for database migration.
## Create a Consumer ## Create a Consumer
...@@ -285,16 +296,15 @@ You configure the following parameters when creating a consumer: ...@@ -285,16 +296,15 @@ You configure the following parameters when creating a consumer:
| Parameter | Type | Description | Remarks | | Parameter | Type | Description | Remarks |
| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- | | :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection | | `td.connect.ip` | string | IP address of the server side | |
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection | | `td.connect.user` | string | User Name | |
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection | | `td.connect.pass` | string | Password | |
| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection | | `td.connect.port` | string | Port of the server side | |
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. | | `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. Each topic can create up to 100 consumer groups. |
| `client.id` | string | Client ID | Maximum length: 192. | | `client.id` | string | Client ID | Maximum length: 192. |
| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) | | `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
| `enable.auto.commit` | boolean | Commit automatically; true: user application doesn't need to explicitly commit; false: user application need to handle commit by itself | Default value is true | | `enable.auto.commit` | boolean | Commit automatically; true: user application doesn't need to explicitly commit; false: user application need to handle commit by itself | Default value is true |
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds | | `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds |
| `experimental.snapshot.enable` | boolean | Specify whether to consume data in TSDB; true: both data in WAL and in TSDB can be consumed; false: only data in WAL can be consumed | default value: false |
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | default value: false | `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | default value: false
The method of specifying these parameters depends on the language used: The method of specifying these parameters depends on the language used:
...@@ -312,7 +322,6 @@ tmq_conf_set(conf, "group.id", "cgrpName"); ...@@ -312,7 +322,6 @@ tmq_conf_set(conf, "group.id", "cgrpName");
tmq_conf_set(conf, "td.connect.user", "root"); tmq_conf_set(conf, "td.connect.user", "root");
tmq_conf_set(conf, "td.connect.pass", "taosdata"); tmq_conf_set(conf, "td.connect.pass", "taosdata");
tmq_conf_set(conf, "auto.offset.reset", "earliest"); tmq_conf_set(conf, "auto.offset.reset", "earliest");
tmq_conf_set(conf, "experimental.snapshot.enable", "true");
tmq_conf_set(conf, "msg.with.table.name", "true"); tmq_conf_set(conf, "msg.with.table.name", "true");
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL); tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
...@@ -327,6 +336,7 @@ Java programs use the following parameters: ...@@ -327,6 +336,7 @@ Java programs use the following parameters:
| Parameter | Type | Description | Remarks | | Parameter | Type | Description | Remarks |
| ----------------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------- | | ----------------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------- |
| `td.connect.type` | string | Connection type: "jni" means native connection, "ws" means WebSocket connection; the default is "jni" |
| `bootstrap.servers` | string |Connection address, such as `localhost:6030` | | `bootstrap.servers` | string |Connection address, such as `localhost:6030` |
| `value.deserializer` | string | Value deserializer; to use this method, implement the `com.taosdata.jdbc.tmq.Deserializer` interface or inherit the `com.taosdata.jdbc.tmq.ReferenceDeserializer` type | | `value.deserializer` | string | Value deserializer; to use this method, implement the `com.taosdata.jdbc.tmq.Deserializer` interface or inherit the `com.taosdata.jdbc.tmq.ReferenceDeserializer` type |
| `value.deserializer.encoding` | string | Specify the encoding for string deserialization | | | `value.deserializer.encoding` | string | Specify the encoding for string deserialization | |
...@@ -368,7 +378,6 @@ conf := &tmq.ConfigMap{ ...@@ -368,7 +378,6 @@ conf := &tmq.ConfigMap{
"td.connect.port": "6030", "td.connect.port": "6030",
"client.id": "test_tmq_c", "client.id": "test_tmq_c",
"enable.auto.commit": "false", "enable.auto.commit": "false",
"experimental.snapshot.enable": "true",
"msg.with.table.name": "true", "msg.with.table.name": "true",
} }
consumer, err := NewConsumer(conf) consumer, err := NewConsumer(conf)
...@@ -402,23 +411,6 @@ from taos.tmq import Consumer ...@@ -402,23 +411,6 @@ from taos.tmq import Consumer
consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"}) consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"})
``` ```
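Putting the parameters to work, a typical subscribe-and-poll loop looks roughly like the sketch below; the topic name and connection details are assumptions, and the result-handling calls follow taospy's Kafka-style API:

```python
from taos.tmq import Consumer

consumer = Consumer({
    "group.id": "local",
    "td.connect.ip": "127.0.0.1",
    "enable.auto.commit": "true",
})
consumer.subscribe(["topic_name"])  # assumed to have been created earlier with CREATE TOPIC
try:
    while True:
        res = consumer.poll(1.0)  # wait up to 1 second for a message
        if res is None:
            continue
        if res.error():
            raise Exception(res.error())
        for block in res.value():  # each block holds rows from one table
            print(block.fetchall())
finally:
    consumer.unsubscribe()
    consumer.close()
```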
Python programs use the following parameters:
| Parameter | Type | Description | Remarks |
|:---------:|:----:|:-----------:|:-------:|
| `td.connect.ip` | string | Used in establishing a connection||
| `td.connect.user` | string | Used in establishing a connection||
| `td.connect.pass` | string | Used in establishing a connection||
| `td.connect.port` | string | Used in establishing a connection||
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192 |
| `client.id` | string | Client ID | Maximum length: 192 |
| `msg.with.table.name` | string | Specify whether to deserialize table names from messages | pecify `true` or `false` |
| `enable.auto.commit` | string | Commit automatically | pecify `true` or `false` |
| `auto.commit.interval.ms` | string | Interval for automatic commits, in milliseconds | |
| `auto.offset.reset` | string | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
| `experimental.snapshot.enable` | string | Specify whether it's allowed to consume messages from the WAL or from TSDB | Specify `true` or `false` |
| `enable.heartbeat.background` | string | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | Specify `true` or `false` |
</TabItem> </TabItem>
<TabItem label="Node.JS" value="Node.JS"> <TabItem label="Node.JS" value="Node.JS">
......
...@@ -10,10 +10,10 @@ TDengine uses various kinds of caching techniques to efficiently write and query ...@@ -10,10 +10,10 @@ TDengine uses various kinds of caching techniques to efficiently write and query
TDengine uses an insert-driven cache management policy, known as first in, first out (FIFO). This policy differs from read-driven "least recently used (LRU)" cache management. A FIFO policy stores the latest data in cache and flushes the oldest data from cache to disk when the cache usage reaches a threshold. In IoT use cases, the most recent data or the current state is most important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data. TDengine uses an insert-driven cache management policy, known as first in, first out (FIFO). This policy differs from read-driven "least recently used (LRU)" cache management. A FIFO policy stores the latest data in cache and flushes the oldest data from cache to disk when the cache usage reaches a threshold. In IoT use cases, the most recent data or the current state is most important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data.
When you create a database, you can configure the size of the write cache on each vnode. The **vgroups** parameter determines the number of vgroups that process data in the database, and the **buffer** parameter determines the size of the write cache for each vnode. When you create a database, you can configure the size of the write cache on each vnode. The **vgroups** parameter determines the number of vgroups that process data in the database, and the **buffer** parameter determines the size of the write cache for each vnode. The unit of buffer is MB.
```sql ```sql
create database db0 vgroups 100 buffer 16MB create database db0 vgroups 100 buffer 16
``` ```
In theory, larger cache sizes are always better. However, at a certain point, it becomes impossible to improve performance by increasing cache size. In most scenarios, you can retain the default cache settings. In theory, larger cache sizes are always better. However, at a certain point, it becomes impossible to improve performance by increasing cache size. In most scenarios, you can retain the default cache settings.
...@@ -28,10 +28,10 @@ When you create a database, you can configure whether the latest data from every ...@@ -28,10 +28,10 @@ When you create a database, you can configure whether the latest data from every
## Metadata Cache ## Metadata Cache
To improve query and write performance, each vnode caches the metadata that it receives. When you create a database, you can configure the size of the metadata cache through the *pages* and *pagesize* parameters. To improve query and write performance, each vnode caches the metadata that it receives. When you create a database, you can configure the size of the metadata cache through the *pages* and *pagesize* parameters. The unit of pagesize is KB.
```sql ```sql
create database db0 pages 128 pagesize 16kb create database db0 pages 128 pagesize 16
``` ```
The preceding SQL statement creates 128 pages on each vnode in the `db0` database. Each page has a 16 KB metadata cache. The preceding SQL statement creates 128 pages on each vnode in the `db0` database. Each page has a 16 KB metadata cache.
......
...@@ -11,7 +11,7 @@ When using TDengine to store and query data, the most important part of the data ...@@ -11,7 +11,7 @@ When using TDengine to store and query data, the most important part of the data
- The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`. - The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`.
- Internal function `NOW` can be used to get the current timestamp on the client side. - Internal function `NOW` can be used to get the current timestamp on the client side.
- The current timestamp of the client side is applied when `NOW` is used to insert data. - The current timestamp of the client side is applied when `NOW` is used to insert data.
- Epoch Timetimestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from UTC 1970-01-01 00:00:00. - Epoch Time: timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from UTC 1970-01-01 00:00:00.
- Add/subtract operations can be carried out on timestamps. For example `NOW-2h` means 2 hours prior to the time at which query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `SELECT * FROM t1 WHERE ts > NOW-2w AND ts <= NOW-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations. - Add/subtract operations can be carried out on timestamps. For example `NOW-2h` means 2 hours prior to the time at which query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `SELECT * FROM t1 WHERE ts > NOW-2w AND ts <= NOW-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations.
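As a small illustration of the timestamp arithmetic above (a sketch; table `t1` and the connection details are assumptions):

```python
import taos  # taospy, assumed installed

conn = taos.connect(host="localhost", user="root", password="taosdata", database="test")
# rows between two weeks ago and one week ago, using the NOW arithmetic described above
result = conn.query("SELECT * FROM t1 WHERE ts > NOW-2w AND ts <= NOW-1w")
for row in result:
    print(row)
conn.close()
```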
Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds. Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.
...@@ -25,7 +25,7 @@ CREATE DATABASE db_name PRECISION 'ns'; ...@@ -25,7 +25,7 @@ CREATE DATABASE db_name PRECISION 'ns';
In TDengine, the data types below can be used when specifying a column or tag. In TDengine, the data types below can be used when specifying a column or tag.
| # | **type** | **Bytes** | **Description** | | # | **type** | **Bytes** | **Description** |
| --- | :--------------: | ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | --- | :---------------: | ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | TIMESTAMP | 8 | Default precision is millisecond, microsecond and nanosecond are also supported. | | 1 | TIMESTAMP | 8 | Default precision is millisecond, microsecond and nanosecond are also supported. |
| 2 | INT | 4 | Integer, the value range is [-2^31, 2^31-1]. | | 2 | INT | 4 | Integer, the value range is [-2^31, 2^31-1]. |
| 3 | INT UNSIGNED | 4 | Unsigned integer, the value range is [0, 2^32-1]. | | 3 | INT UNSIGNED | 4 | Unsigned integer, the value range is [0, 2^32-1]. |
...@@ -35,18 +35,17 @@ In TDengine, the data types below can be used when specifying a column or tag. ...@@ -35,18 +35,17 @@ In TDengine, the data types below can be used when specifying a column or tag.
| 7 | DOUBLE | 8 | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308]. | | 7 | DOUBLE | 8 | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308]. |
| 8 | BINARY | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. | | 8 | BINARY | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. |
| 9 | SMALLINT | 2 | Short integer, the value range is [-32768, 32767]. | | 9 | SMALLINT | 2 | Short integer, the value range is [-32768, 32767]. |
| 10 | INT UNSIGNED | 2 | unsigned integer, the value range is [0, 65535]. | | 10 | SMALLINT UNSIGNED | 2 | Unsigned short integer, the value range is [0, 65535]. |
| 11 | TINYINT | 1 | Single-byte integer, the value range is [-128, 127]. | | 11 | TINYINT | 1 | Single-byte integer, the value range is [-128, 127]. |
| 12 | TINYINT UNSIGNED | 1 | Unsigned single-byte integer, the value range is [0, 255]. | | 12 | TINYINT UNSIGNED | 1 | Unsigned single-byte integer, the value range is [0, 255]. |
| 13 | BOOL | 1 | Bool, the value range is {true, false}. | | 13 | BOOL | 1 | Bool, the value range is {true, false}. |
| 14 | NCHAR | User Defined | Multi-byte string that can include multi byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. | | 14 | NCHAR | User Defined | Multi-byte string that can include multi byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
| 15 | JSON | | JSON type can only be used on tags. A tag of JSON type cannot be used together with tags of any other type. | | 15 | JSON | | JSON type can only be used on tags. A tag of JSON type cannot be used together with tags of any other type. |
| 16 | VARCHAR | User-defined | Alias of BINARY | | 16 | VARCHAR | User-defined | Alias of BINARY |
:::note :::note
- Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type. - Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
- The length of BINARY can be up to 16,374 bytes. The string value must be quoted with single quotes. You must specify a length in bytes for a BINARY value, for example binary(20) for up to twenty single-byte characters. If the data exceeds the specified length, an error will occur. The literal single quote inside the string must be preceded with back slash like `\'` - The length of BINARY can be up to 16,374 bytes (65,517 for data columns and 16,382 for tag columns since version 3.0.5.0). The string value must be quoted with single quotes. You must specify a length in bytes for a BINARY value, for example binary(20) for up to twenty single-byte characters. If the data exceeds the specified length, an error will occur. The literal single quote inside the string must be preceded with back slash like `\'`
- Numeric values in SQL statements will be determined as integer or float type according to whether there is decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered as a legal float number. - Numeric values in SQL statements will be determined as integer or float type according to whether there is decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered as a legal float number.
::: :::
......
...@@ -72,8 +72,8 @@ database_option: { ...@@ -72,8 +72,8 @@ database_option: {
- 0: The database can contain multiple supertables. - 0: The database can contain multiple supertables.
- 1: The database can contain only one supertable. - 1: The database can contain only one supertable.
- STT_TRIGGER: specifies the number of file merges triggered by flushed files. The default is 8, ranging from 1 to 16. For high-frequency scenarios with few tables, it is recommended to use the default configuration or a smaller value for this parameter; For multi-table low-frequency scenarios, it is recommended to configure this parameter with a larger value. - STT_TRIGGER: specifies the number of file merges triggered by flushed files. The default is 8, ranging from 1 to 16. For high-frequency scenarios with few tables, it is recommended to use the default configuration or a smaller value for this parameter; For multi-table low-frequency scenarios, it is recommended to configure this parameter with a larger value.
- TABLE_PREFIX:The prefix length in the table name that is ignored when distributing table to vnode based on table name. - TABLE_PREFIX: When positive, the prefix of this length in the table name is ignored when distributing a table to a vgroup; when negative, only the prefix of this length is used. The default value is 0. For example, for table name v30001, "0001" is used if TABLE_PREFIX is set to 2, but "v3" is used if TABLE_PREFIX is set to -2. This can help you control the distribution of tables.
- TABLE_SUFFIX:The suffix length in the table name that is ignored when distributing table to vnode based on table name. - TABLE_SUFFIX: When positive, the suffix of this length in the table name is ignored when distributing a table to a vgroup; when negative, only the suffix of this length is used. The default value is 0. For example, for table name v30001, "v300" is used if TABLE_SUFFIX is set to 2, but "01" is used if TABLE_SUFFIX is set to -2. This can help you control the distribution of tables.
- TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB. - TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB.
- WAL_RETENTION_PERIOD: specifies the maximum time for which WAL files are kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 0. A value of 0 indicates that WAL files are not required to be kept for consumption. Set it to a proper value before creating topics. - WAL_RETENTION_PERIOD: specifies the maximum time for which WAL files are kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 0. A value of 0 indicates that WAL files are not required to be kept for consumption. Set it to a proper value before creating topics.
- WAL_RETENTION_SIZE: specifies the maximum total size of WAL files to be kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that the total size of WAL files kept for consumption has no upper limit. - WAL_RETENTION_SIZE: specifies the maximum total size of WAL files to be kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that the total size of WAL files kept for consumption has no upper limit.
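A hedged sketch combining several of the options above into one CREATE DATABASE statement (all values are illustrative, not recommendations):

```python
import taos  # taospy, assumed installed

conn = taos.connect(host="localhost", user="root", password="taosdata")
conn.execute(
    "CREATE DATABASE IF NOT EXISTS db1"
    " STT_TRIGGER 8"              # default file-merge trigger
    " TABLE_PREFIX 2"             # ignore a 2-character prefix when assigning tables to vgroups
    " TSDB_PAGESIZE 8"            # 8 KB storage-engine page size
    " WAL_RETENTION_PERIOD 3600"  # keep WAL files for 1 hour for data subscription
)
conn.close()
```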
......