Unverified commit ad98bb30, authored by Yuting, committed by GitHub

docs(stonedb): Update the latest documents(#409) (#410)

* docs(stonedb): Update the latest documents(#409)

* Update compile-using-centos7.md

* Delete use-mydumper-full-backup.md

The current topic is the latest.

* Delete use-mysqldump-backup-and-restore.md

The current topic online is the latest.

* Update use-navicat.md

Changed the numbering of the steps.

* Delete use-mydumper-full-backup.md

The current topic online is the latest.

* Delete regular-change-operations.md

The current topic online is the latest.
Co-authored-by: haitaoguan <105625912+haitaoguan@users.noreply.github.com>
Co-authored-by: xuejiao-joy <107540910+xuejiao-joy@users.noreply.github.com>
Parent 2330b073
name: 📖 Documentation issue
description: Report something incorrect or missing in documentation and help to improve our documentation
title: 'docs: '
labels: ["A-documentation"]
body:
......
......@@ -26,4 +26,8 @@ The following table provides descriptions of common terms.
| **data compression** | A process performed to reduce the size of data files. The data compression ratio is determined by the data type, degree of duplication, and compression algorithm. |
| **OLTP** | The acronym of online transaction processing. OLTP features quick response for interactions and high concurrency of small transactions. Typical applications are transaction systems of banks. |
| **OLAP** | The acronym of online analytical processing. OLAP features complex analytical querying on a large amount of data. Typical applications are data warehouses. |
| **HTAP** | The acronym of hybrid transaction/analytical processing. HTAP is an emerging application architecture built to allow one system for both transactions and analytics. |
| **Data Pack** | Data Packs are data storage units. Data in each column is sliced into Data Packs every 65,536 rows. |
| **Data Pack Node** | A Data Pack Node stores the following information about a Data Pack:<br />- The maximum, minimum, average, and sum of the values<br />- The number of values and the number of non-null values<br />- The compression method<br />- The length in bytes |
| **Knowledge Node** | Knowledge Nodes are at the upper layer of Data Pack Nodes. Knowledge Nodes store a collection of metadata that shows the relations between Data Packs and columns, including the range of value occurrence, data characteristics, and certain statistics. Most data stored in a Knowledge Node is generated when data is being loaded and the rest is generated during queries. |
| **Knowledge Grid** | The Knowledge Grid consists of Data Pack Nodes and Knowledge Nodes. Data Packs are compressed for storage and the cost of decompressing them is high. Therefore, the key to improving read performance is to read as few Data Packs as possible. The Knowledge Grid helps filter out irrelevant data. With the Knowledge Grid, the data retrieved can be reduced to less than 1% of the total data. In most cases, the retrieved data can be loaded into memory, which further improves query processing efficiency. |
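For example, under this slicing rule, a column that holds 1,000,000 rows is stored in 16 Data Packs, because ⌈1,000,000 / 65,536⌉ = 16.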
......@@ -4,7 +4,6 @@ sidebar_position: 3.2
---
# Quick Deployment in a Docker Container
## Prerequisites
The image of StoneDB is downloaded from Docker Hub.
......@@ -12,12 +11,12 @@ The image of StoneDB is downloaded from Docker Hub.
## Procedure
The username and password for login are **root** and **stonedb123**.
### 1. Pull the image
Run the following command:
```bash
docker pull stoneatom/stonedb:v0.1
```
### 2. Run the image
Run the following command:
```bash
docker run -p 13306:3306 -v $stonedb_volumn_dir/data/:/stonedb56/install/data/ -it -d stoneatom/stonedb:v0.1 /bin/bash
......@@ -28,8 +27,8 @@ docker run -p 13306:3306 -it -d stoneatom/stonedb:v0.1 /bin/bash
```
Parameter description:
- **-p**: maps ports in the _Port of the host_:_Port of the container_ format.
- **-v**: mounts directories in the _Path in the host_:_Path in the container_ format. If no directories are mounted, the container will be reinitialized when it restarts.
- **-i**: enables interactive operation.
- **-t**: allocates a terminal.
- **-d**: starts the container in the background without entering it. To enter the container after startup, run the `docker exec` command.
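For example, after the container starts, you can enter it and log in to StoneDB. The following is a minimal sketch that assumes the default credentials **root**/**stonedb123** and the installation path used by this image:

```bash
# Obtain the container ID, enter the container, and log in to StoneDB.
docker ps
docker exec -it <container ID> bash
/stonedb56/install/bin/mysql -uroot -pstonedb123
```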
......
......@@ -14,6 +14,7 @@ For more information about Prometheus, visit [https://prometheus.io/docs/introdu
To download Prometheus, visit [https://prometheus.io/download/](https://prometheus.io/download/).
### Grafana introduction
Grafana is a suite of open-source visualization and analytics tools. It enables you to query, visualize, alert on, and explore your metrics, logs, and traces wherever they are stored, and to share them with your teams. It is most commonly used to analyze and visualize time series data collected from infrastructure and applications.
For more information, visit [https://grafana.com/docs/grafana/latest/introduction/](https://grafana.com/docs/grafana/latest/introduction/).
### Environment introduction
Machine A: on which Prometheus and Grafana are deployed in Docker containers
......@@ -66,8 +67,9 @@ docker run -d --restart=always --name=prometheus -p 9090:9090 \
--storage.tsdb.retention.time=30d
```
If `"http://<IP address of machine A>:9090"` appears, Prometheus is successfully deployed. If the deployment fails, run the `docker logs <Container ID>` command to view logs and rectify the fault.
![image.png](./Prometheus.png)
## **Step 2. Deploy Prometheus Exporter**
The following example shows how to monitor the OS and a MySQL database.
......@@ -163,7 +165,7 @@ stdout_logfile = /var/log/supervisor/mysqld_exporter.log
systemctl restart supervisord
```
## **Step 3. Configure Prometheus to monitor mysqld_exporter and node_exporter**
1. Complete the following configuration:
```shell
......@@ -217,7 +219,7 @@ docker restart 892d640f51b2
3. Wait a while and view **State** on the **Targets** page of Prometheus. If the value for every metric is **up**, performance data is collected.
![image.png](./Prometheus_Status_Targets.png)
## **Step 4. Deploy Grafana**
1. Deploy Grafana in a Docker container.
......@@ -249,48 +251,54 @@ docker run -d --restart=always --name=grafana -p 13000:3000 \
-v /home/zsp/grafana/data/grafana/:/var/lib/grafana/ grafana/grafana
```
3. Visit `http://<IP address of machine A>:13000` and log in to Grafana. The default username and password are **admin** and **admin**.
![image.png](./Grafana.png)
## **Step 5. Configure Grafana to display monitoring data from Prometheus**
### **Configure the Prometheus data source**
1. Log in to Grafana. In the left-side navigation pane, choose **Configuration** > **Data sources**.
![image.png](./Prometheus_data_source.png)
2. In the **Time series databases** area, click **Prometheus**.
![image.png](./Prometheus_add_data_source.png)
3. Configure **URL** and **Scrape Interval**.
![image.png](./Prometheus_settings.png)
4. Click **Save & test**. If message "Data source is working" is displayed, the data source is configured.
![image.png](./Prometheus_save.png)
### **Configure Grafana Monitoring Dashboards**
On the **Configuration** page, choose **+** > **Import**, and import the official dashboards. You can customize dashboards based on your needs.
Official dashboards can be obtained at: [https://grafana.com/grafana/dashboards/](https://grafana.com/grafana/dashboards/)
The following procedure shows how to configure dashboard 11074 to monitor node_exporter.
1. In the left-side navigation pane, choose **+** > **Import**.
![image.png](./Grafana_import1.png)
2. In the **Import via Grafana.com** text box, enter the dashboard ID and click **Load**. In this example, the dashboard ID is 11074, which is a dashboard for monitoring OSs.
![image.png](./Grafana_import2.png)
3. In the **VictoriaMetrics** drop-down list, select **Prometheus**.
![image.png](./Grafana_import3.png)
The procedure to configure the monitoring dashboard for a MySQL database is similar to the previous example.
Following are some example screenshots after monitoring dashboard 11323 is configured.
Screenshot 1:
![image.png](./Mysql_setting1.png)
Screenshot 2:
![image.png](./Mysql_setting2.png)
---
id: use-mydumper-full-backup
sidebar_position: 4.32
---
# Use Mydumper for Full Backup
## Mydumper introduction
Mydumper is a logical backup tool for MySQL. It consists of two parts:
- mydumper: exports consistent backup files of MySQL databases.
- myloader: reads backups from mydumper, connects to destination databases, and imports backups.
Both tools are multithreaded.
### Benefits
- Parallelism and performance: The tool provides a high backup rate. Expensive character set conversion routines are avoided and the overall code is highly efficient.
- Simplified output management: Separate files are used for tables and metadata is dumped, simplifying data viewing and parsing.
- High consistency: The tool maintains snapshots across all threads and provides accurate positions of primary and secondary logs.
- Manageability: Perl Compatible Regular Expressions (PCRE) can be used to specify whether to include or exclude tables or databases.
### Features
- Multi-threaded backup, which generates multiple backup files
- Consistent snapshots for transactional and non-transactional tables
:::info
This feature is supported by versions later than 0.2.2.
:::
- Fast file compression
- Export of binlogs
- Multi-threaded recovery
:::info
This feature is supported by versions later than 0.2.1.
:::
- Function as a daemon to periodically perform snapshots and consistently records binlogs
:::info
This feature is supported by versions later than 0.5.0.
:::
- Open source (license: GNU GPLv3)
## Use Mydumper
### Parameters for mydumper
```bash
mydumper --help
Usage:
mydumper [OPTION…] multi-threaded MySQL dumping
Help Options:
-?, --help Show help options
Application Options:
-B, --database Database to dump
-o, --outputdir Directory to output files to
-s, --statement-size Attempted size of INSERT statement in bytes, default 1000000
-r, --rows Try to split tables into chunks of this many rows. This option turns off --chunk-filesize
-F, --chunk-filesize Split tables into chunks of this output file size. This value is in MB
--max-rows Limit the number of rows per block after the table is estimated, default 1000000
-c, --compress Compress output files
-e, --build-empty-files Build dump files even if no data available from table
-i, --ignore-engines Comma delimited list of storage engines to ignore
-N, --insert-ignore Dump rows with INSERT IGNORE
-m, --no-schemas Do not dump table schemas with the data and triggers
-M, --table-checksums Dump table checksums with the data
-d, --no-data Do not dump table data
--order-by-primary Sort the data by Primary Key or Unique key if no primary key exists
-G, --triggers Dump triggers. By default, it do not dump triggers
-E, --events Dump events. By default, it do not dump events
-R, --routines Dump stored procedures and functions. By default, it do not dump stored procedures nor functions
-W, --no-views Do not dump VIEWs
-k, --no-locks Do not execute the temporary shared read lock. WARNING: This will cause inconsistent backups
--no-backup-locks Do not use Percona backup locks
--less-locking Minimize locking time on InnoDB tables.
--long-query-retries Retry checking for long queries, default 0 (do not retry)
--long-query-retry-interval Time to wait before retrying the long query check in seconds, default 60
-l, --long-query-guard Set long query timer in seconds, default 60
-K, --kill-long-queries Kill long running queries (instead of aborting)
-D, --daemon Enable daemon mode
-X, --snapshot-count number of snapshots, default 2
-I, --snapshot-interval Interval between each dump snapshot (in minutes), requires --daemon, default 60
-L, --logfile Log file name to use, by default stdout is used
--tz-utc SET TIME_ZONE='+00:00' at top of dump to allow dumping of TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones, defaults to on use --skip-tz-utc to disable.
--skip-tz-utc
--use-savepoints Use savepoints to reduce metadata locking issues, needs SUPER privilege
--success-on-1146 Not increment error count and Warning instead of Critical in case of table doesn't exist
--lock-all-tables Use LOCK TABLE for all, instead of FTWRL
-U, --updated-since Use Update_time to dump only tables updated in the last U days
--trx-consistency-only Transactional consistency only
--complete-insert Use complete INSERT statements that include column names
--split-partitions Dump partitions into separate files. This options overrides the --rows option for partitioned tables.
--set-names Sets the names, use it at your own risk, default binary
-z, --tidb-snapshot Snapshot to use for TiDB
--load-data
--fields-terminated-by
--fields-enclosed-by
--fields-escaped-by Single character that is going to be used to escape characters in the LOAD DATA stament, default: '\'
--lines-starting-by Adds the string at the begining of each row. When --load-data is usedit is added to the LOAD DATA statement. Its affects INSERT INTO statementsalso when it is used.
--lines-terminated-by Adds the string at the end of each row. When --load-data is used it isadded to the LOAD DATA statement. Its affects INSERT INTO statementsalso when it is used.
--statement-terminated-by This might never be used, unless you know what are you doing
--sync-wait WSREP_SYNC_WAIT value to set at SESSION level
--where Dump only selected records.
--no-check-generated-fields Queries related to generated fields are not going to be executed.It will lead to restoration issues if you have generated columns
--disk-limits Set the limit to pause and resume if determines there is no enough disk space.Accepts values like: '<resume>:<pause>' in MB.For instance: 100:500 will pause when there is only 100MB free and willresume if 500MB are available
--csv Automatically enables --load-data and set variables to export in CSV format.
-t, --threads Number of threads to use, default 4
-C, --compress-protocol Use compression on the MySQL connection
-V, --version Show the program version and exit
-v, --verbose Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
--defaults-file Use a specific defaults file
--stream It will stream over STDOUT once the files has been written
--no-delete It will not delete the files after stream has been completed
-O, --omit-from-file File containing a list of database.table entries to skip, one per line (skips before applying regex option)
-T, --tables-list Comma delimited table list to dump (does not exclude regex option)
-h, --host The host to connect to
-u, --user Username with the necessary privileges
-p, --password User password
-a, --ask-password Prompt For User password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for connection
-x, --regex Regular expression for 'db.table' matching
```
### Parameters for myloader
```bash
myloader --help
Usage:
myloader [OPTION…] multi-threaded MySQL loader
Help Options:
-?, --help Show help options
Application Options:
-d, --directory Directory of the dump to import
-q, --queries-per-transaction Number of queries per transaction, default 1000
-o, --overwrite-tables Drop tables if they already exist
-B, --database An alternative database to restore into
-s, --source-db Database to restore
-e, --enable-binlog Enable binary logging of the restore data
--innodb-optimize-keys Creates the table without the indexes and it adds them at the end
--set-names Sets the names, use it at your own risk, default binary
-L, --logfile Log file name to use, by default stdout is used
--purge-mode This specify the truncate mode which can be: NONE, DROP, TRUNCATE and DELETE
--disable-redo-log Disables the REDO_LOG and enables it after, doesn't check initial status
-r, --rows Split the INSERT statement into this many rows.
--max-threads-per-table Maximum number of threads per table to use, default 4
--skip-triggers Do not import triggers. By default, it imports triggers
--skip-post Do not import events, stored procedures and functions. By default, it imports events, stored procedures nor functions
--no-data Do not dump or import table data
--serialized-table-creation Table recreation will be executed in serie, one thread at a time
--resume Expect to find resume file in backup dir and will only process those files
-t, --threads Number of threads to use, default 4
-C, --compress-protocol Use compression on the MySQL connection
-V, --version Show the program version and exit
-v, --verbose Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
--defaults-file Use a specific defaults file
--stream It will stream over STDOUT once the files has been written
--no-delete It will not delete the files after stream has been completed
-O, --omit-from-file File containing a list of database.table entries to skip, one per line (skips before applying regex option)
-T, --tables-list Comma delimited table list to dump (does not exclude regex option)
-h, --host The host to connect to
-u, --user Username with the necessary privileges
-p, --password User password
-a, --ask-password Prompt For User password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for connection
-x, --regex Regular expression for 'db.table' matching
--skip-definer Removes DEFINER from the CREATE statement. By default, statements are not modified
```
### Install and use Mydumper
```bash
# On GitHub, download the RPM package or source code package that corresponds to the machine that you use. We recommend you download the RPM package because it can be directly used while the source code package requires compilation. The OS used in the following example is CentOS 7. Therefore, download an el7 version.
[root@dev tmp]# wget https://github.com/mydumper/mydumper/releases/download/v0.12.1/mydumper-0.12.1-1-zstd.el7.x86_64.rpm
# Because the downloaded package is a ZSTD file, dependency 'libzstd' is required.
[root@dev tmp]# yum install libzstd.x86_64 -y
[root@dev tmp]# rpm -ivh mydumper-0.12.1-1-zstd.el7.x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:mydumper-0.12.1-1 ################################# [100%]
# Back up the database
[root@dev home]# mydumper -u root -p ******** -P 3306 -h 127.0.0.1 -B zz -o /home/dumper/
# Restore the database
[root@dev home]# myloader -u root -p ******** -P 3306 -h 127.0.0.1 -S /stonedb/install/tmp/mysql.sock -B zz -d /home/dumper
```
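As a sketch of how the preceding options combine, the following hypothetical invocation backs up the zz database with multiple threads, compressed output files, and a regular-expression filter; all flags are taken from the parameter reference above:

```bash
# Hypothetical example: prompt for the password (-a), use 8 threads (-t),
# compress the output files (-c), dump only objects matching 'zz.' (-x),
# and write the files to /home/dumper/ (-o).
mydumper -u root -a -h 127.0.0.1 -P 3306 -t 8 -c -x '^zz\.' -o /home/dumper/
```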
#### Generated backup files
```bash
[root@dev home]# ll dumper/
total 112
-rw-r--r--. 1 root root 139 Mar 23 14:24 metadata
-rw-r--r--. 1 root root 88 Mar 23 14:24 zz-schema-create.sql
-rw-r--r--. 1 root root 97819 Mar 23 14:24 zz.t_user.00000.sql
-rw-r--r--. 1 root root 4 Mar 23 14:24 zz.t_user-metadata
-rw-r--r--. 1 root root 477 Mar 23 14:24 zz.t_user-schema.sql
[root@dev dumper]# cat metadata
Started dump at: 2022-03-23 15:51:40
SHOW MASTER STATUS:
Log: mysql-bin.000002
Pos: 4737113
GTID:
Finished dump at: 2022-03-23 15:51:40
[root@dev-myos dumper]# cat zz-schema-create.sql
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `zz` /*!40100 DEFAULT CHARACTER SET utf8 */;
[root@dev dumper]# more zz.t_user.00000.sql
/*!40101 SET NAMES binary*/;
/*!40014 SET FOREIGN_KEY_CHECKS=0*/;
/*!40103 SET TIME_ZONE='+00:00' */;
INSERT INTO `t_user` VALUES(1,"e1195afd-aa7d-11ec-936e-00155d840103","kAMXjvtFJym1S7PAlMJ7",102,62,"2022-03-23 15:50:16")
,(2,"e11a7719-aa7d-11ec-936e-00155d840103","0ufCd3sXffjFdVPbjOWa",698,44,"2022-03-23 15:50:16")
..... # The output is not fully displayed because it is too long.
[root@dev dumper]# cat zz.t_user-metadata
10000
[root@dev-myos dumper]# cat zz.t_user-schema.sql
/*!40101 SET NAMES binary*/;
/*!40014 SET FOREIGN_KEY_CHECKS=0*/;
/*!40103 SET TIME_ZONE='+00:00' */;
CREATE TABLE `t_user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`c_user_id` varchar(36) NOT NULL DEFAULT '',
`c_name` varchar(22) NOT NULL DEFAULT '',
`c_province_id` int(11) NOT NULL,
`c_city_id` int(11) NOT NULL,
`create_time` datetime NOT NULL,
PRIMARY KEY (`id`),
KEY `idx_user_id` (`c_user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=10001 DEFAULT CHARSET=utf8;
```
The directory contains the following files:
**metadata**: records the name and position of the binlog file of the backup database at the backup point in time.
:::info
If the backup is performed on the standby database, this file also records the name and position of the binlog file that has been synchronized from the active database when the backup is performed.
:::
Each database and each table has its own backup files:
- **database-schema-create.sql**: records the statement for creating the database.
- **database.table-schema.sql**: records the table schemas.
- **database.table.00000.sql**: records table data.
- **database.table-metadata**: records table metadata.
***Extensions***
If you want to import data to StoneDB, you must replace **engine=innodb** with **engine=stonedb** in the table schema file **database.table-schema.sql** and check whether the syntax of the table schema is compatible with StoneDB. For example, if the schema contains the keyword **unsigned**, it is incompatible. The following is a schema example after modification:
```
[root@dev-myos dumper]# cat zz.t_user-schema.sql
/*!40101 SET NAMES binary*/;
/*!40014 SET FOREIGN_KEY_CHECKS=0*/;
/*!40103 SET TIME_ZONE='+00:00' */;
CREATE TABLE `t_user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`c_user_id` varchar(36) NOT NULL DEFAULT '',
`c_name` varchar(22) NOT NULL DEFAULT '',
`c_province_id` int(11) NOT NULL,
`c_city_id` int(11) NOT NULL,
`create_time` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=STONEDB AUTO_INCREMENT=10001 DEFAULT CHARSET=utf8;
```
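Because every **database.table-schema.sql** file needs this change, you can rewrite the engine in all schema files at once before running myloader. The following is a minimal sketch that assumes the dump directory **/home/dumper** used above:

```bash
# Replace the storage engine in every table schema file in the dump directory.
sed -i 's/ENGINE=InnoDB/ENGINE=STONEDB/g' /home/dumper/*-schema.sql
```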
### Backup principles
1. The main thread executes **FLUSH TABLES WITH READ LOCK** to impose a global read-only lock and ensure data consistency.
2. The name and position of the binlog file at the current point in time are obtained and recorded to the **metadata** file to support later recovery.
3. Multiple dump threads (4 by default, customizable) set the transaction isolation level to REPEATABLE READ and start consistent-read transactions.
4. Non-InnoDB tables are exported.
5. After data of the non-transaction engine is backed up, the main thread executes **UNLOCK TABLES** to release the global read-only lock.
6. InnoDB tables are exported.
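The preceding sequence corresponds roughly to the following statements, shown here as a sketch of a mysql session; the exact statements vary with the mydumper version and options:

```bash
mysql> FLUSH TABLES WITH READ LOCK;       -- step 1: global read-only lock
mysql> SHOW MASTER STATUS;                -- step 2: binlog name and position
mysql> SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- step 3
mysql> START TRANSACTION /*!40108 WITH CONSISTENT SNAPSHOT */;    -- step 3
mysql> UNLOCK TABLES;                     -- step 5: release the global lock
```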
......@@ -4,10 +4,8 @@ sidebar_position: 4.1
---
# Regular Change Operations
## Change operations on schemas and data of tables
This section describes the supported change operations on schemas and data of tables.
### Create tables using the same schema
1. Execute a `CREATE TABLE` statement to create a StoneDB table.
......@@ -18,53 +16,50 @@ CREATE TABLE t_name(
col1 INT NOT NULL AUTO_INCREMENT,
col2 VARCHAR(10) NOT NULL,
......
PRIMARY KEY (`col1`)
) engine=stonedb;
```
:::info
The row-based storage engine is named StoneDB in StoneDB-5.6, and is renamed to Tianmu in StoneDB-5.7 to distinguish from the database StoneDB.
:::
2. Execute a `CREATE TABLE ... LIKE` statement to create another table that uses the same schema as the table created in step 1.
Code example:
```sql
create table t_other like t_name;
```
### Clear data in a table
Execute a `TRUNCATE TABLE` statement to clear data in a table and retain the table schema.<br />Code example:
```sql
truncate table t_name;
```
### Drop a table
Code example:
```sql
drop table t_name;
```
### Add a field
Execute an `ALTER TABLE ... ADD COLUMN` statement to add a field to a given table. By default, the added field is the last field.<br />Code example:
```sql
alter table t_name add column c_name varchar(10);
```
### Drop a field
Execute an `ALTER TABLE ... DROP` statement to drop a field from a table.<br />Code example:
```sql
alter table t_name drop c_name;
```
### Rename a table
Execute an `ALTER TABLE ... RENAME TO` statement to rename a given table.<br />Code example:
```sql
alter table t_name rename to t_name_new;
```
## Change operations on user permissions
### Create a user
Code example:
```sql
create user 'u_name'@'hostname' identified by 'xxx';
```
### Grant user permissions
Code example:
```sql
......@@ -72,20 +67,15 @@ grant all on *.* to 'u_name'@'hostname';
grant select on db_name.* to 'u_name'@'hostname';
grant select(column_name) on db_name.t_name to 'u_name'@'hostname';
```
### Revoke permissions
Code example:
```sql
revoke all privileges on *.* from 'u_name'@'hostname';
revoke select on db_name.* from 'u_name'@'hostname';
revoke select(column_name) on db_name.t_name from 'u_name'@'hostname';
```
### Drop a user
Code example:
```sql
drop user 'u_name'@'hostname';
```
......@@ -56,6 +56,7 @@ yum install -y libedit-devel
yum install -y libaio-devel
yum install -y libicu
yum install -y libicu-devel
yum install -y jemalloc-devel
```
### Step 2. Install GCC 9.3.0
Before performing the follow-up steps, you must ensure the GCC version is 9.3.0.
......@@ -88,7 +89,7 @@ gcc --version
### Step 3. Install third-party libraries
Ensure that the CMake version in your environment is 3.7.2 or later and the Make version is 3.82 or later. Otherwise, install CMake, Make, or both of them of the correct versions.
:::info
StoneDB depends on marisa, RocksDB, and Boost. You are advised to specify paths for saving these libraries when you install them, instead of using the default paths.
:::
1. Install CMake.
......@@ -100,17 +101,19 @@ cd cmake-3.7.2
/usr/local/bin/cmake --version
rm -rf /usr/bin/cmake
ln -s /usr/local/bin/cmake /usr/bin/
cmake --version
```
2. Install Make.
```shell
wget http://mirrors.ustc.edu.cn/gnu/make/make-3.82.tar.gz
tar -zxvf make-3.82.tar.gz
cd make-3.82
./configure --prefix=/usr/local/make
make && make install
rm -rf /usr/local/bin/make
ln -s /usr/local/make/bin/make /usr/local/bin/make
make --version
```
3. Install marisa.
......@@ -160,13 +163,10 @@ cd boost_1_66_0
```
The installation directory of Boost in the example is **/usr/local/stonedb-boost**. You can change it based on your actual conditions.
:::info
During the compilation, the occurrences of keywords **warning** and **failed** are normal, unless **error** is displayed and the CLI is automatically closed.<br />It takes about 25 minutes to install Boost.
:::
### Step 4. Compile StoneDB
Currently, StoneDB has two branches: StoneDB-5.6 (for MySQL 5.6) and StoneDB-5.7 (for MySQL 5.7). The link provided in this topic is to the source code package of StoneDB-5.7. In the following example, the source code package is saved to the root directory and the branch is switched to StoneDB-5.6 for compilation.
```shell
cd /
git clone https://github.com/stoneatom/stonedb.git
......@@ -193,8 +193,9 @@ install_target=/stonedb56/install
### Execute the compilation script.
sh stonedb_build.sh
```
If your OS is CentOS or RHEL, you must comment out **os_dist** and **os_dist_release**, and modify the setting of **build_tag** to exclude the **os_dist** and **os_dist_release** parts. This is because the values of **Distributor**, **Release**, and **Codename** in the output of the **lsb_release -a** command are **n/a**. Commenting out **os_dist** and **os_dist_release** only affects the names of the log file and the TAR package and has no impact on the compilation results.
## **Step 5. Start StoneDB**
Users can start StoneDB in two ways: manual installation or automatic installation.
1. Create an account.
```shell
......@@ -270,4 +271,4 @@ mysql> show databases;
| test |
+--------------------+
7 rows in set (0.00 sec)
```
......@@ -55,6 +55,7 @@ yum install -y libedit-devel
yum install -y libaio-devel
yum install -y libicu
yum install -y libicu-devel
yum install -y jemalloc-devel
```
### Step 2. Install GCC 9.3.0
Before performing the follow-up steps, you must ensure the GCC version is 9.3.0.
......@@ -87,7 +88,7 @@ gcc --version
### Step 3. Install third-party libraries
Ensure that the CMake version in your environment is 3.7.2 or later and the Make version is 3.82 or later. Otherwise, install CMake, Make, or both of them of the correct versions.
:::info
StoneDB depends on marisa, RocksDB, and Boost. You are advised to specify paths for saving these libraries when you install them, instead of using the default paths.
:::
1. Install CMake.
......@@ -99,17 +100,19 @@ cd cmake-3.7.2
/usr/local/bin/cmake --version
rm -rf /usr/bin/cmake
ln -s /usr/local/bin/cmake /usr/bin/
cmake --version
```
2. Install Make.
```shell
wget http://mirrors.ustc.edu.cn/gnu/make/make-3.82.tar.gz
tar -zxvf make-3.82.tar.gz
cd make-3.82
./configure --prefix=/usr/local/make
make && make install
rm -rf /usr/local/bin/make
ln -s /usr/local/make/bin/make /usr/local/bin/make
make --version
```
3. Install marisa.
......@@ -159,13 +162,10 @@ cd boost_1_66_0
```
The installation directory of Boost in the example is **/usr/local/stonedb-boost**. You can change it based on your actual conditions.
:::info
During the compilation, the occurrences of keywords **warning** and **failed** are normal, unless **error** is displayed and the CLI is automatically closed.<br />It takes about 25 minutes to install Boost.
:::
### Step 4. Compile StoneDB
Currently, StoneDB has two branches: StoneDB-5.6 (for MySQL 5.6) and StoneDB-5.7 (for MySQL 5.7). The link provided in this topic is to the source code package of StoneDB-5.7. In the following example, the source code package is saved to the root directory and the branch is switched to StoneDB-5.6 for compilation.
```shell
cd /
git clone https://github.com/stoneatom/stonedb.git
......@@ -176,7 +176,6 @@ Before compilation, modify the compilation script as follows:
1. Change the installation directory of StoneDB based on your actual conditions. In the example, **/stonedb56/install** is used.
1. Change the installation directories of marisa, RocksDB, and Boost based on your actual conditions.
```shell
### Modify the compilation script.
cd /stonedb/scripts
......@@ -193,9 +192,9 @@ install_target=/stonedb56/install
### Execute the compilation script.
sh stonedb_build.sh
```
After the compilation is complete, a folder named **/stonedb56** is generated.
If your OS is CentOS or RHEL, you must comment out **os_dist** and **os_dist_release**, and modify the setting of **build_tag** to exclude the **os_dist** and **os_dist_release** parts. This is because the values of **Distributor**, **Release**, and **Codename** in the output of the **lsb_release -a** command are **n/a**. Commenting out **os_dist** and **os_dist_release** only affects the names of the log file and the TAR package and has no impact on the compilation results.
## **Step 5. Start StoneDB**
Users can start StoneDB in two ways: manual installation or automatic installation.
1. Create an account.
```shell
......@@ -271,4 +270,4 @@ mysql> show databases;
| test |
+--------------------+
7 rows in set (0.00 sec)
```
......@@ -17,7 +17,6 @@ Ensure that the tools and third-party libraries used in your environment meet th
- Boost 1.66
## Procedure
### Step 1. Install the dependencies
```shell
sudo apt install -y gcc
sudo apt install -y g++
......@@ -53,7 +52,7 @@ sudo apt install -y libreadline-dev
sudo apt install -y libpam0g-dev
sudo apt install -y zlib1g-dev
sudo apt install -y libicu-dev
sudo apt install -y libboost-dev
sudo apt install -y libgflags-dev
sudo apt install -y libjemalloc-dev
sudo apt install -y libssl-dev
......@@ -62,10 +61,10 @@ sudo apt install -y pkg-config
:::info
Ensure that all the dependencies are installed. Otherwise, a large number of errors will be reported.
:::
### Step 2. Install third-party dependencies
Ensure that the CMake version in your environment is 3.7.2 or later and the Make version is 3.82 or later. Otherwise, install CMake, Make, or both of them of the correct versions.
:::info
StoneDB depends on marisa, RocksDB, and Boost. You are advised to specify paths for saving these libraries when you install them, instead of using the default paths.
:::
1. Install CMake.
......@@ -77,11 +76,12 @@ cd cmake-3.7.2
/usr/local/bin/cmake --version
apt remove cmake -y
ln -s /usr/local/bin/cmake /usr/bin/
cmake --version
```
2. Install Make.
```shell
wget http://mirrors.ustc.edu.cn/gnu/make/make-3.82.tar.gz
tar -zxvf make-3.82.tar.gz
cd make-3.82
./configure --prefix=/usr/local/make
......@@ -101,12 +101,11 @@ sudo make && make install
```
The installation directory of marisa in the example is **/usr/local/stonedb-marisa**. You can change it based on your actual conditions. In this step, the following directories and files are generated in **/usr/local/stonedb-marisa/lib**.
![](./marisa.png)
1. Install RocksDB.
```shell
wget https://github.com/facebook/rocksdb/archive/refs/tags/v6.12.6.tar.gz
tar -zxvf v6.12.6.tar.gz
cd rocksdb-6.12.6
......@@ -131,7 +130,7 @@ sudo make install -j`nproc`
```
The installation directory of RocksDB in the example is **/usr/local/stonedb-gcc-rocksdb**. You can change it based on your actual conditions. In this step, the following directories and files are generated in **/usr/local/stonedb-gcc-rocksdb**.
![](./rocksdb.png)
1. Install Boost.
```shell
......@@ -143,17 +142,13 @@ cd boost_1_66_0
```
The installation directory of Boost in the example is **/usr/local/stonedb-boost**. You can change it based on your actual conditions. In this step, the following directories and files are generated in **/usr/local/stonedb-boost/lib**.
![image.png](./boost.png)
:::info
During the compilation, the occurrences of keywords **warning** and **failed** are normal, unless **error** is displayed and the CLI is automatically closed.<br />It takes about 25 minutes to install Boost.
:::
### Step 3. Compile StoneDB
Currently, StoneDB has two branches: StoneDB-5.6 (for MySQL 5.6) and StoneDB-5.7 (for MySQL 5.7). The link provided in this topic is to the source code package of StoneDB-5.7. In the following example, the source code package is saved to the root directory and the branch is switched to StoneDB-5.6 for compilation.
```shell
cd /
git clone https://github.com/stoneatom/stonedb.git
......@@ -180,8 +175,9 @@ install_target=/stonedb56/install
### Execute the compilation script.
sh stonedb_build.sh
```
If your OS is CentOS or RHEL, you must comment out **os_dist** and **os_dist_release**, and modify the setting of **build_tag** to exclude the **os_dist** and **os_dist_release** parts. This is because the values of **Distributor**, **Release**, and **Codename** in the output of the `lsb_release -a` command are **n/a**. Commenting out **os_dist** and **os_dist_release** only affects the names of the log file and the TAR package and has no impact on the compilation results.
### Step 4. Start StoneDB
Users can start StoneDB in two ways: manual installation or automatic installation.
1. Create an account.
```shell
......@@ -257,4 +253,4 @@ mysql> show databases;
| test |
+--------------------+
7 rows in set (0.00 sec)
```
......@@ -6,16 +6,23 @@ sidebar_position: 5.21
# Use mysql to Connect to StoneDB
This topic describes how to use the MySQL command-line client named mysql to connect to StoneDB.
## **Prerequisites**
MySQL has been installed and its version is 5.5, 5.6, or 5.7.
## **Procedure**
Specify required parameters according to the following format.
```shell
mysql -u -p -h -P -S -A
/stonedb/install/bin/mysql -uroot -p -S /stonedb/install/tmp/mysql.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.36-StoneDB-log build-
Copyright (c) 2000, 2022 StoneAtom Group Holding Limited
No entry for terminal type "xterm";
using dumb terminal settings.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
```
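If you connect over TCP instead of the socket file, specify the host and port. A minimal sketch, with placeholder host and port values:

```bash
mysql -uroot -p -h 127.0.0.1 -P 3306
```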
## Parameter description
The following table describes required parameters.
......
......@@ -10,21 +10,22 @@ Navicat is a database management tool that allows you to connect to databases. Y
This topic shows you how to use Navicat to connect to StoneDB.
## Prerequisites
Navicat has been installed.
## Procedure
1. Open Navicat and choose **File** > **New Connection** > **MySQL**.
![](./navicat-step1.png)
2. In the dialog box that appears, click the **General** tab, and enter the connection name, server IP address, port, username, and password. The following figure provides an example.
![](./navicat-step2.png)
3. Click **Test Connection**. If message "Connection successful" appears, the connection to StoneDB is established.
![](./navicat-step3.png)
:::info
You cannot use Navicat to connect to StoneDB as a super administrator ('root'@'localhost').
:::
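Instead, connect with an account that is allowed to log in from a remote host. The following is a minimal sketch; the account name, host pattern, and password are placeholders:

```bash
mysql> CREATE USER 'navicat_user'@'%' IDENTIFIED BY '********';
mysql> GRANT ALL ON *.* TO 'navicat_user'@'%';
```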
......@@ -9,7 +9,7 @@ Create a stored procedure. For example, perform the following two steps to creat
1. Execute the following SQL statement to create a table:
```sql
CREATE TABLE t_user(
id INT NOT NULL AUTO_INCREMENT,
first_name VARCHAR(10) NOT NULL,
last_name VARCHAR(10) NOT NULL,
......@@ -17,8 +17,11 @@ CREATE TABLE t_test(
score INT NOT NULL,
copy_id INT NOT NULL,
PRIMARY KEY (`id`)
) engine=stonedb;
```
:::info
The row-based storage engine is named StoneDB in StoneDB-5.6, and is renamed to Tianmu in StoneDB-5.7 to distinguish from the database StoneDB.
:::
2. Execute the following SQL statement to create the stored procedure:
```sql
......@@ -51,13 +54,10 @@ END //
DELIMITER ;
```
Call a stored procedure. For example, execute the following SQL statement to call stored procedure **add_user**:
```sql
call add_user(1000000);
```
Drop a stored procedure. For example, execute the following SQL statement to drop stored procedure **add_user**:
```sql
drop PROCEDURE add_user;
```
......@@ -6,24 +6,22 @@ sidebar_position: 5.52
# Create and Manage a Table
Create a table. For example, execute the following SQL statement to create a table which is named **student** and consists of the **id**, **name**, **age**, and **birthday** fields:
```sql
create table student(
id int(11) primary key,
name varchar(10),
age smallint,
birthday DATE
) engine=stonedb;
```
:::info
The row-based storage engine is named StoneDB in StoneDB-5.6, and is renamed to Tianmu in StoneDB-5.7 to distinguish from the database StoneDB.
:::
Query the schema of a table. For example, execute the following SQL statement to query the schema of table **student**:
```sql
show create table student;
```
Drop a table. For example, execute the following SQL statement to drop table **student**:
```sql
drop table student;
```
......@@ -6,19 +6,14 @@ sidebar_position: 5.53
# Create and Manage a View
Create a view. For example, execute the following SQL statement to create a view named **v_s**, used to query teachers aged over 18:
```sql
create view v_s as select name from teachers where age>18;
```
Check the statement used for creating a view. For example, execute the following SQL statement to check the statement used for creating view **v_s**:
```sql
show create view v_s;
```
Drop a view. For example, execute the following SQL statement to drop view **v_s**:
```sql
drop view v_s;
```
......@@ -5,10 +5,7 @@ sidebar_position: 5.3
# DML Statements
This topic describes the DML statements supported by StoneDB.<br />In this topic, table **t_test** created by executing the following statement is used in all code examples.
```sql
CREATE TABLE t_test(
id INT NOT NULL AUTO_INCREMENT,
......@@ -18,9 +15,11 @@ CREATE TABLE t_test(
score INT NOT NULL,
copy_id INT NOT NULL,
PRIMARY KEY (`id`)
) engine=stonedb;
```
:::info
The row-based storage engine is named StoneDB in StoneDB-5.6, and is renamed to Tianmu in StoneDB-5.7 to distinguish from the database StoneDB.
:::
## INSERT
```sql
insert into t_test values(1,'jack','rose','0',58,1);
......@@ -40,4 +39,4 @@ insert into t_test1 values(1,'Bond','Jason','1',47,10) on duplicate key update l
```
:::info
The logic of this syntax is to insert a row of data. The UPDATE statement is executed only if a primary key constraint or unique constraint conflict occurs.
:::
......@@ -4,26 +4,26 @@ sidebar_position: 5.4
---
# Statements for Queries
## **Statements for common queries**
### **UNION/UNION ALL**
```sql
select first_name from t_test1
union all
select first_name from t_test2;
```
### **DISTINCT**
```sql
select distinct first_name from t_test1;
```
### **LIKE**
```sql
select * from t_test where first_name like 'zhou%';
```
### **GROUP BY/ORDER BY**
```sql
select first_name,count(*) from t_test1 group by first_name order by 2;
```
### **HAVING**
```sql
select e.id, count(e.id), round(avg(e.score), 2)
from t_test1 e
......@@ -40,28 +40,28 @@ select sum(score) from t_test;
select * from t_test1 limit 10;
select * from t_test1 limit 10,10;
```
## **Statements used for join queries**
### **INNER JOIN**
```sql
select t1.id,t1.first_name,t2.last_name from t_test1 t1,t_test2 t2 where t1.id = t2.id;
```
### **LEFT JOIN**
```sql
select t1.id,t1.first_name,t2.last_name from t_test1 t1 left join t_test2 t2 on t1.id = t2.id and t1.id=100;
```
### **RIGHT JOIN**
```sql
select t1.id,t1.first_name,t2.last_name from t_test1 t1 right join t_test2 t2 on t1.id = t2.id and t1.id=100;
```
## **Statements used for subqueries**
### **Statement for scalar subqueries**
```sql
select e.id,
e.first_name,
(select d.first_name from t_test2 d where d.id = e.id) as first_name
from t_test1 e;
```
### **Statement for derived subqueries**
```sql
select a.first_name, b.last_name
from t_test1 a, (select id,last_name from t_test2) b
......@@ -71,7 +71,7 @@ where a.id = b.id;
```sql
select * from t_test1 where id in(select id from t_test2);
```
### **EXISTS/NOT EXISTS**
```sql
select * from t_test1 A where exists (select 1 from t_test2 B where B.id = A.id);
```
......@@ -88,5 +88,3 @@ select * from t_test1 A where exists (select 1 from t_test2 B where B.id = A.id)
......@@ -13,7 +13,7 @@ sidebar_position: 1.4
**View**: a virtual table that does not actually store data; its content is defined by a query.
**Stored procedure**: a set of SQL statements that implements a specific function; it is compiled and stored in the database, and users execute it by specifying its name and passing parameters.
**Database**: a collection of database objects, such as tables, views, and stored procedures.
......@@ -35,9 +35,16 @@ sidebar_position: 1.4
**Data compression**: the process of reducing the size of data files. The compression ratio is determined by the data type, the degree of data duplication, and the compression algorithm.
**OLTP**: On-Line Transaction Processing, characterized by quick interactive responses and highly concurrent small transactions. A typical business system is a bank transaction system.
**OLAP**: On-Line Analytical Processing, characterized by complex analytical queries over massive amounts of data. A typical business system is a data warehouse.
**HTAP**: Hybrid Transaction/Analytical Processing, an emerging application architecture designed to break the barrier between OLTP and OLAP.
**Data Pack**: Data Packs store the actual data and are the lowest-level storage units. Each column is sliced into Data Packs of 65,536 rows each.
**Data Pack Node**: also called a metadata node; it records, for each Data Pack, the maximum, minimum, average, and sum of the column values, the total number of values, the number of null values, the compression method, and the length in bytes.
**Knowledge Node**: the layer above Data Pack Nodes. In addition to metadata that describes relations between Data Packs or between columns, such as value ranges and column associations, it records data characteristics and deeper statistics. Most Knowledge Node data is generated when data is loaded; the rest is generated during queries.
**Knowledge Grid**: consists of Data Pack Nodes and Knowledge Nodes. Because Data Packs are stored compressed, reading and decompressing them is expensive, so the key to query efficiency is reading as few Data Packs as possible. The Knowledge Grid serves exactly this purpose: it effectively filters out data that does not match the query and locates data at Data Pack granularity at minimal cost. Knowledge Grid data occupies less than 1% of the total data size and can usually be loaded into memory, further improving query efficiency.
---
id: supported-servers-and-OperatingSystems
sidebar_position: 2.1
---
# Recommended Hardware and Software Environments
StoneDB, independently designed and developed by StoneAtom, is the first open-source HTAP (Hybrid Transactional and Analytical Processing) database in China built on the MySQL kernel. It deploys and runs well on 64-bit general-purpose hardware server platforms with the Intel x86 architecture, and supports most mainstream hardware network devices and mainstream Linux operating systems.
## Supported hardware platforms
| Hardware platform type | List |
| --- | --- |
......@@ -22,4 +24,4 @@ StoneDB support for the ARM64 and Power architectures is still being tested and
:::info
StoneDB has been verified on common versions of the preceding three operating systems. Other OS versions, such as Debian and Fedora, may compile successfully but have not been fully verified.
:::
......@@ -4,11 +4,11 @@ sidebar_position: 3.2
---
# Quick Deployment of StoneDB with Docker
## StoneDB Docker Hub address
[Docker Hub](https://hub.docker.com/r/stoneatom/stonedb)
## Usage
The default login username and password are root and stonedb123.
### 1. docker pull
```bash
docker pull stoneatom/stonedb:v0.1
......@@ -16,13 +16,16 @@ docker pull stoneatom/stonedb:v0.1
### 2. docker run
Parameter description:
-p: port mapping, which maps a container port to a host port; the host port comes first and the container port second
-v: directory mounting; if no directory is mounted, the container is reinitialized when it restarts. For example, in `-v $stonedb_volumn_dir/data/:/stonedb56/install/data/`, the host path comes first and the container path second
-i: interactive operation
-t: terminal
-d: start without entering the container; to enter the container, use the docker exec command
```bash
docker run -p 13306:3306 -v $stonedb_volumn_dir/data/:/stonedb56/install/data/ -it -d stoneatom/stonedb:v0.1 /bin/bash
```
......@@ -30,16 +33,17 @@ or
```bash
docker run -p 13306:3306 -it -d stoneatom/stonedb:v0.1 /bin/bash
```
### 3. Log in to the container and use StoneDB
```bash
# Obtain the container ID
$ docker ps
# Use the container ID obtained from docker ps to enter the container
$ docker exec -it <container ID> bash
<container ID>$ /stonedb56/install/bin/mysql -uroot -pstonedb123
```
### 4. Log in to StoneDB from outside the container
Log in by using the mysql client; third-party tools such as Navicat and DBeaver connect in a similar way
```shell
mysql -h<host IP> -uroot -pstonedb123 -P<mapped host port>
```
......@@ -11,19 +11,25 @@ sidebar_position: 4.21
Prometheus official download address: [https://prometheus.io/download/](https://prometheus.io/download/)
Grafana introduction:
Grafana is an open-source suite for monitoring data analytics and visualization. It is most commonly used to visualize and analyze time series data from infrastructure and applications, and it also fits other fields that need data visualization and analysis. Grafana helps you query, visualize, alert on, and analyze the metrics and data you care about, and dashboards can be shared with the whole team, which helps foster a data-driven culture.
For details, see the official introduction: [Grafana introduction](https://grafana.com/docs/grafana/latest/introduction/)
Grafana official website: [https://grafana.com/](https://grafana.com/)
## Deployment environment
Machine A: Prometheus + Grafana, deployed with Docker
Machine B: mysqld_exporter + node_exporter
### Why mount the data and configuration files?
Because this document uses Docker for deployment, configuration and data files that are not mounted out of the container may be reset when the container restarts. To prevent the loss of monitoring data, mount the data files out of the container. Mounting the configuration file also lets you modify it on the physical machine and apply the change by restarting Docker, without entering the container.
<p id="1"></p>
## Step 1: Deploy Prometheus
On machine A, use Docker to start a Prometheus container, or download and decompress the tar package from the Prometheus official website, take the prometheus.yml and data files from it, and place them in the specified directories. In this document, the data and yml files are placed in the data/ and config/ directories under /home/prometheus.
```shell
# First start a Prometheus container without mounts or port mapping
docker run -d \
prom/prometheus
......@@ -33,7 +39,7 @@ mkdir -p /home/prometheus/config/
docker ps
docker cp 3fe0e3ea2aa5:/etc/prometheus/prometheus.yml /home/prometheus/config/
docker cp 3fe0e3ea2aa5:/prometheus /home/prometheus/data/
# Set permissions on the data folder; otherwise, data writes will fail for lack of permission after mounting
chmod 777 /home/zsp/prometheus/data/*
cd /home/zsp/prometheus/
......@@ -50,7 +56,7 @@ tree
├── 00000000
└── 00000001
```
Start a new container with the data directory and configuration file mounted
```shell
docker run -d --restart=always --name=prometheus -p 9090:9090 \
-v /home/prometheus/config/prometheus.yml:/etc/prometheus/prometheus.yml \
......@@ -63,28 +69,32 @@ docker run -d --restart=always --name=prometheus -p 9090:9090 \
--storage.tsdb.retention.time=30d
```
Visit machine A at IP:9090; if the page loads, the deployment succeeded. If it fails, run docker logs <container ID> to view the error logs and troubleshoot.
![image.png](./Prometheus.png)
## Step 2: Deploy the exporters
The following uses MySQL and OS monitoring as examples:
We recommend using supervisord to manage the exporter services on the monitored machine.
For supervisord usage, see: [https://www.jianshu.com/p/0b9054b33db3](https://www.jianshu.com/p/0b9054b33db3)
### Deploy node_exporter
Download and decompress node_exporter
```shell
wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
tar -zxvf node_exporter-1.3.1.linux-amd64.tar.gz
mv node_exporter-1.3.1.linux-amd64 node_exporter
mv node_exporter /usr/local/
cd /usr/local/node_exporter/
# Start node_exporter to test whether it works properly
./node_exporter
# Open a new terminal and check
ss -nltp |grep 9100
LISTEN 0 128 :::9100 :::* users:(("node_exporter",pid=17268,fd=3))
```
Use supervisor to manage the node_exporter process.
```shell
vi /etc/supervisord.d/node_exporter.ini
......@@ -103,8 +113,8 @@ stdout_logfile = /var/log/supervisor/node_exporter.log
systemctl restart supervisord
```
### Deploy mysqld_exporter
Log in to the MySQL database on machine B and create the MySQL monitoring account
```shell
GRANT REPLICATION CLIENT, PROCESS ON *.* TO 'exporter'@'localhost' identified by 'exporter@123';
GRANT SELECT ON performance_schema.* TO 'exporter'@'localhost';
......@@ -118,22 +128,22 @@ mv mysqld_exporter-0.14.0.linux-amd64 mysqld_exporter
mv mysqld_exporter /usr/local/
cd /usr/local/mysqld_exporter/
```
Create the .my.cnf configuration file with the monitoring account credentials.
```bash
vi .my.cnf
[client]
user=exporter
password=exporter@123
```
Start mysqld_exporter monitoring.
```bash
# Test whether the exporter works properly
/usr/local/mysqld_exporter/mysqld_exporter --config.my-cnf=/usr/local/mysqld_exporter/.my.cnf
ss -nltp |grep 9104
LISTEN 0 128 :::9104 :::* users:(("mysqld_exporter",pid=17266,fd=3))
```
Use supervisor to manage the mysqld_exporter process.
```shell
cat /etc/supervisord.d/mysqld_exporter.ini
......@@ -153,7 +163,7 @@ stdout_logfile = /var/log/supervisor/mysqld_exporter.log
systemctl restart supervisord
```
## Step 3: Configure Prometheus to monitor mysqld_exporter and node_exporter
```shell
# my global config
global:
......@@ -197,16 +207,17 @@ scrape_configs:
labels:
instance: machineB_stonedb
```
After adding mysqld_exporter and node_exporter, restart the Docker container so that the configuration file is reloaded.
```shell
docker restart 892d640f51b2
```
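The same check can also be done from the command line with the Prometheus HTTP API instead of the web UI (replace the placeholder with machine A's IP):
```shell
# Every scraped target should report "health":"up" once collection succeeds
curl -s http://<machine-A-IP>:9090/api/v1/targets | grep -o '"health":"[a-z]*"'
```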
Wait a moment and then check Status -> Targets in Prometheus (the screenshot for this step was missed, so a screenshot from another environment is used instead); if all metrics endpoints show as up, metric collection is working.
![image.png](./Prometheus_Status_Targets.png)
## Step 4: Deploy Grafana
### Deploying with Docker
Docker 部署 Prometheus 一样,需要先把 Grafana 里面的数据文件和配置文件挂载出来,和上面《[docker 复制config 和data文件命令](#1)》 步骤
```shell
docker run -d --name=grafana -p 13000:3000 grafana/grafana
docker cp c2bfbdd0827f:/etc/grafana/grafana.ini /home/zsp/grafana/config/
docker cp c2bfbdd0827f:/var/lib/grafana /home/zsp/grafana/data/
chmod 777 /home/zsp/grafana/data/*
cd /home/zsp/grafana/
tree
.
├── config
│   └── grafana.ini
└── data
    └── grafana
        ├── plugins
        └── png
```
Start Grafana:
```shell
docker run -d --restart=always --name=grafana -p 13000:3000 \
-v /home/zsp/grafana/config/grafana.ini:/etc/grafana/grafana.ini \
-v /home/zsp/grafana/data/grafana/:/var/lib/grafana/ grafana/grafana
```
Once started, visit: [http://machine-A-IP:13000/](http://192.168.30.101:13000/)
The initial account is admin, and the initial password is admin.
![image.png](./Grafana.png)
## Step 5: Display Prometheus Monitoring Data in Grafana
### Configure the Prometheus data source
![image.png](./Prometheus_data_source.png)
![image.png](./Prometheus_add_data_source.png)
Usually only these two settings are needed.
![image.png](./Prometheus_settings.png)
Click Save & test at the bottom; "Data source is working" means the data source is set up.
![image.png](./Prometheus_save.png)
### Configure Grafana dashboards
Import the official dashboards, or add custom ones of your own; the official dashboards can be found at: [https://grafana.com/grafana/dashboards/](https://grafana.com/grafana/dashboards/)
This article uses dashboard 11074 for node_exporter and dashboard 11323 for MySQL.
![image.png](./Grafana_import1.png)
![image.png](./Grafana_import2.png)
![image.png](./Grafana_import3.png)
The MySQL dashboard is configured in the same way, so those screenshots are omitted; the following screenshots show the final result.
![image.png](./Mysql_setting1.png)
![image.png](./Mysql_setting2.png)
---
id: use-mydumper-full-backup
sidebar_position: 4.32
---
# Full MySQL Backup with mydumper
mydumper project page: [https://github.com/mydumper/mydumper](https://github.com/mydumper/mydumper)
## About Mydumper
### What is Mydumper?
Mydumper is a logical backup tool for MySQL. It provides two utilities:
- mydumper, which exports a consistent backup of MySQL databases
- myloader, which reads a mydumper backup, connects to the target database, and imports it. Both utilities are multi-threaded.
### Advantages of Mydumper
- Parallelism (and therefore speed) and performance (it avoids expensive character-set conversion routines and the code is efficient overall)
- Easier-to-manage output (separate files per table, dump metadata, and so on, easy to view and parse)
- Consistency: it maintains a snapshot across all threads and records accurate primary and replica log positions
- Manageability: it supports PCRE patterns for including and excluding databases and tables
### Key features of Mydumper
- Multi-threaded backup that produces multiple backup files
- Consistent snapshots of transactional and non-transactional tables (versions 0.2.2 and later)
- Fast file compression
- Binlog export
- Multi-threaded restore (versions 0.2.1 and later)
- Daemon mode with periodic snapshots and continuous binary logs (versions 0.5.0 and later)
- Open source (GNU GPLv3)
## Using Mydumper
### mydumper options
```bash
mydumper --help
Usage:
mydumper [OPTION…] multi-threaded MySQL dumping
Help Options:
-?, --help Show help options
Application Options:
-B, --database Database to dump
-o, --outputdir Directory to output files to
-s, --statement-size Attempted size of INSERT statement in bytes, default 1000000
-r, --rows Try to split tables into chunks of this many rows. This option turns off --chunk-filesize
-F, --chunk-filesize Split tables into chunks of this output file size. This value is in MB
--max-rows Limit the number of rows per block after the table is estimated, default 1000000
-c, --compress Compress output files
-e, --build-empty-files Build dump files even if no data available from table
-i, --ignore-engines Comma delimited list of storage engines to ignore
-N, --insert-ignore Dump rows with INSERT IGNORE
-m, --no-schemas Do not dump table schemas with the data and triggers
-M, --table-checksums Dump table checksums with the data
-d, --no-data Do not dump table data
--order-by-primary Sort the data by Primary Key or Unique key if no primary key exists
-G, --triggers Dump triggers. By default, it do not dump triggers
-E, --events Dump events. By default, it do not dump events
-R, --routines Dump stored procedures and functions. By default, it do not dump stored procedures nor functions
-W, --no-views Do not dump VIEWs
-k, --no-locks Do not execute the temporary shared read lock. WARNING: This will cause inconsistent backups
--no-backup-locks Do not use Percona backup locks
--less-locking Minimize locking time on InnoDB tables.
--long-query-retries Retry checking for long queries, default 0 (do not retry)
--long-query-retry-interval Time to wait before retrying the long query check in seconds, default 60
-l, --long-query-guard Set long query timer in seconds, default 60
-K, --kill-long-queries Kill long running queries (instead of aborting)
-D, --daemon Enable daemon mode
-X, --snapshot-count number of snapshots, default 2
-I, --snapshot-interval Interval between each dump snapshot (in minutes), requires --daemon, default 60
-L, --logfile Log file name to use, by default stdout is used
--tz-utc SET TIME_ZONE='+00:00' at top of dump to allow dumping of TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones, defaults to on use --skip-tz-utc to disable.
--skip-tz-utc
--use-savepoints Use savepoints to reduce metadata locking issues, needs SUPER privilege
--success-on-1146 Not increment error count and Warning instead of Critical in case of table doesn't exist
--lock-all-tables Use LOCK TABLE for all, instead of FTWRL
-U, --updated-since Use Update_time to dump only tables updated in the last U days
--trx-consistency-only Transactional consistency only
--complete-insert Use complete INSERT statements that include column names
--split-partitions Dump partitions into separate files. This options overrides the --rows option for partitioned tables.
--set-names Sets the names, use it at your own risk, default binary
-z, --tidb-snapshot Snapshot to use for TiDB
--load-data
--fields-terminated-by
--fields-enclosed-by
--fields-escaped-by Single character that is going to be used to escape characters in theLOAD DATA stament, default: '\'
--lines-starting-by Adds the string at the begining of each row. When --load-data is usedit is added to the LOAD DATA statement. Its affects INSERT INTO statementsalso when it is used.
--lines-terminated-by Adds the string at the end of each row. When --load-data is used it isadded to the LOAD DATA statement. Its affects INSERT INTO statementsalso when it is used.
--statement-terminated-by This might never be used, unless you know what are you doing
--sync-wait WSREP_SYNC_WAIT value to set at SESSION level
--where Dump only selected records.
--no-check-generated-fields Queries related to generated fields are not going to be executed.It will lead to restoration issues if you have generated columns
--disk-limits Set the limit to pause and resume if determines there is no enough disk space.Accepts values like: '<resume>:<pause>' in MB.For instance: 100:500 will pause when there is only 100MB free and willresume if 500MB are available
--csv Automatically enables --load-data and set variables to export in CSV format.
-t, --threads Number of threads to use, default 4
-C, --compress-protocol Use compression on the MySQL connection
-V, --version Show the program version and exit
-v, --verbose Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
--defaults-file Use a specific defaults file
--stream It will stream over STDOUT once the files has been written
--no-delete It will not delete the files after stream has been completed
-O, --omit-from-file File containing a list of database.table entries to skip, one per line (skips before applying regex option)
-T, --tables-list Comma delimited table list to dump (does not exclude regex option)
-h, --host The host to connect to
-u, --user Username with the necessary privileges
-p, --password User password
-a, --ask-password Prompt For User password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for connection
-x, --regex Regular expression for 'db.table' matching
```
### myloader options
```bash
myloader --help
Usage:
myloader [OPTION…] multi-threaded MySQL loader
Help Options:
-?, --help Show help options
Application Options:
-d, --directory Directory of the dump to import
-q, --queries-per-transaction Number of queries per transaction, default 1000
-o, --overwrite-tables Drop tables if they already exist
-B, --database An alternative database to restore into
-s, --source-db Database to restore
-e, --enable-binlog Enable binary logging of the restore data
--innodb-optimize-keys Creates the table without the indexes and it adds them at the end
--set-names Sets the names, use it at your own risk, default binary
-L, --logfile Log file name to use, by default stdout is used
--purge-mode This specify the truncate mode which can be: NONE, DROP, TRUNCATE and DELETE
--disable-redo-log Disables the REDO_LOG and enables it after, doesn't check initial status
-r, --rows Split the INSERT statement into this many rows.
--max-threads-per-table Maximum number of threads per table to use, default 4
--skip-triggers Do not import triggers. By default, it imports triggers
--skip-post Do not import events, stored procedures and functions. By default, it imports events, stored procedures nor functions
--no-data Do not dump or import table data
--serialized-table-creation Table recreation will be executed in serie, one thread at a time
--resume Expect to find resume file in backup dir and will only process those files
-t, --threads Number of threads to use, default 4
-C, --compress-protocol Use compression on the MySQL connection
-V, --version Show the program version and exit
-v, --verbose Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
--defaults-file Use a specific defaults file
--stream It will stream over STDOUT once the files has been written
--no-delete It will not delete the files after stream has been completed
-O, --omit-from-file File containing a list of database.table entries to skip, one per line (skips before applying regex option)
-T, --tables-list Comma delimited table list to dump (does not exclude regex option)
-h, --host The host to connect to
-u, --user Username with the necessary privileges
-p, --password User password
-a, --ask-password Prompt For User password
-P, --port TCP/IP port to connect to
-S, --socket UNIX domain socket file to use for connection
-x, --regex Regular expression for 'db.table' matching
--skip-definer Removes DEFINER from the CREATE statement. By default, statements are not modified
```
### Installation and usage
```bash
# Download the rpm or source package for your machine from the project's GitHub releases; the source package must be compiled, while the rpm installs easily and is recommended. This article uses CentOS 7, so the el7 build is downloaded.
[root@dev tmp]# wget https://github.com/mydumper/mydumper/releases/download/v0.12.1/mydumper-0.12.1-1-zstd.el7.x86_64.rpm
# The downloaded mydumper build is the zstd flavor, so the libzstd dependency must be installed
[root@dev tmp]# yum install libzstd.x86_64 -y
[root@dev tmp]# rpm -ivh mydumper-0.12.1-1-zstd.el7.x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:mydumper-0.12.1-1 ################################# [100%]
# Back up a database
[root@dev home]# mydumper -u root -p xxx -P 3306 -h 127.0.0.1 -B zz -o /home/dumper/
# Restore a database
[root@dev home]# myloader -u root -p xxx -P 3306 -h 127.0.0.1 -S /stonedb/install/tmp/mysql.sock -B zz -d /home/dumper
```
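The options above can be combined. For example, the following sketch (host, credentials, and paths are placeholders) takes a compressed, four-thread backup split into 100,000-row chunks, including triggers, events, and routines:
```bash
mydumper -u root -a -h 127.0.0.1 -P 3306 -B zz \
  -t 4 -r 100000 -c -G -E -R \
  -o /backup/zz_$(date +%F)
```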
**Files generated by the backup**
```bash
[root@dev home]# ll dumper/
total 112
-rw-r--r--. 1 root root 139 Mar 23 14:24 metadata
-rw-r--r--. 1 root root 88 Mar 23 14:24 zz-schema-create.sql
-rw-r--r--. 1 root root 97819 Mar 23 14:24 zz.t_user.00000.sql
-rw-r--r--. 1 root root 4 Mar 23 14:24 zz.t_user-metadata
-rw-r--r--. 1 root root 477 Mar 23 14:24 zz.t_user-schema.sql
[root@dev dumper]# cat metadata
Started dump at: 2022-03-23 15:51:40
SHOW MASTER STATUS:
Log: mysql-bin.000002
Pos: 4737113
GTID:
Finished dump at: 2022-03-23 15:51:40
[root@dev-myos dumper]# cat zz-schema-create.sql
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `zz` /*!40100 DEFAULT CHARACTER SET utf8 */;
[root@dev dumper]# more zz.t_user.00000.sql
/*!40101 SET NAMES binary*/;
/*!40014 SET FOREIGN_KEY_CHECKS=0*/;
/*!40103 SET TIME_ZONE='+00:00' */;
INSERT INTO `t_user` VALUES(1,"e1195afd-aa7d-11ec-936e-00155d840103","kAMXjvtFJym1S7PAlMJ7",102,62,"2022-03-23 15:50:16")
,(2,"e11a7719-aa7d-11ec-936e-00155d840103","0ufCd3sXffjFdVPbjOWa",698,44,"2022-03-23 15:50:16")
# ...output truncated, not shown in full
[root@dev dumper]# cat zz.t_user-metadata
10000
[root@dev-myos dumper]# cat zz.t_user-schema.sql
/*!40101 SET NAMES binary*/;
/*!40014 SET FOREIGN_KEY_CHECKS=0*/;
/*!40103 SET TIME_ZONE='+00:00' */;
CREATE TABLE `t_user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`c_user_id` varchar(36) NOT NULL DEFAULT '',
`c_name` varchar(22) NOT NULL DEFAULT '',
`c_province_id` int(11) NOT NULL,
`c_city_id` int(11) NOT NULL,
`create_time` datetime NOT NULL,
PRIMARY KEY (`id`),
KEY `idx_user_id` (`c_user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=10001 DEFAULT CHARSET=utf8;
```
The dump directory contains the following files.

The metadata file:
- Records the binary log file name and write position of the backed-up instance at the time of the backup
- If the backup is taken on a replica, it also records the primary's binary log file and position that the replica had synchronized to

Each database has a schema-creation file, and each table has its own backup files:
- database-schema-create.sql: the database creation statement
- database.table-schema.sql: the table structure
- database.table.00000.sql: the table data
- database.table-metadata: the table metadata
***Extension***
To import the data into StoneDB, change engine=innodb to engine=stonedb in the CREATE TABLE statement of each database.table-schema.sql file produced by Mydumper, and check the table structures for syntax StoneDB does not support, such as the unsigned restriction. The modified structure looks like this:
```
[root@dev-myos dumper]# cat zz.t_user-schema.sql
/*!40101 SET NAMES binary*/;
/*!40014 SET FOREIGN_KEY_CHECKS=0*/;
/*!40103 SET TIME_ZONE='+00:00' */;
CREATE TABLE `t_user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`c_user_id` varchar(36) NOT NULL DEFAULT '',
`c_name` varchar(22) NOT NULL DEFAULT '',
`c_province_id` int(11) NOT NULL,
`c_city_id` int(11) NOT NULL,
`create_time` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=STONEDB AUTO_INCREMENT=10001 DEFAULT CHARSET=utf8;
```
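Editing every schema file by hand is tedious when there are many tables; a one-liner like the following sketch (adjust the dump path to your own) switches the engine in all of them at once:
```bash
# Case-insensitive replacement of the engine clause in every table-schema file (GNU sed)
sed -i 's/ENGINE=InnoDB/ENGINE=STONEDB/gI' /home/dumper/*-schema.sql
```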
### How the backup works
- The main thread issues FLUSH TABLES WITH READ LOCK, taking the global read lock to guarantee data consistency
- It reads the current binary log file name and write position and records them in the metadata file, so binlogs can be applied on top of a full restore
- N dump threads (4 by default, configurable) set the transaction isolation level to REPEATABLE READ and open consistent-read transactions
- Non-transactional tables are dumped first
- Once the non-transactional tables are done, the main thread issues UNLOCK TABLES, releasing the global read lock
- InnoDB tables are dumped within the transactions
- The transactions end
---
id: regular-change-operations
sidebar_position: 4.1
---
# Routine Changes
## Table structure changes
StoneDB supports only the table structure and data change operations below; anything not listed is unsupported.
### Create a similar table
1) Create a table named t_name with the stonedb engine:
```sql
CREATE TABLE t_name(
col1 INT NOT NULL AUTO_INCREMENT,
col2 VARCHAR(10) NOT NULL,
...
PRIMARY KEY (`col1`)
) engine=stonedb;
```
2) Create a table t_other with the same structure as t_name,
using a CREATE TABLE ... LIKE statement:
```
create table t_other like t_name;
```
### Truncate table data
The TRUNCATE TABLE statement keeps the table structure and removes only the data in the table:
```
truncate table t_name;
```
### Add a column
Use ALTER TABLE ... ADD COLUMN to add a column to a table; by default the new column is placed after the last existing column.
```
alter table t_name add column c_name varchar(10);
```
### Drop a column
Use ALTER TABLE ... DROP to drop a column from a table.
```
alter table t_name drop c_name;
```
### Rename a table
Use ALTER TABLE ... RENAME TO to rename a table.
```
alter table t_name rename to t_name_new;
```
## User and privilege changes
### Create a user
```
create user 'u_name'@'hostname' identified by 'xxx';
```
### Grant privileges to a user
```
grant all on *.* to 'u_name'@'hostname';
grant select on db_name.* to 'u_name'@'hostname';
grant select(column_name) on db_name.t_name to 'u_name'@'hostname';
```
### Revoke privileges from a user
```
revoke all privileges on *.* from 'u_name'@'hostname';
revoke select on db_name.* from 'u_name'@'hostname';
revoke select(column_name) on db_name.t_name from 'u_name'@'hostname';
```
### Drop a user
```
drop user 'u_name'@'hostname';
```
```shell
yum install -y libedit-devel
yum install -y libaio-devel
yum install -y libicu
yum install -y libicu-devel
yum install -y jemalloc-devel
```
## Step 2: Install gcc 9.3.0
Run the following to check whether the current gcc version meets the installation requirement.
```shell
scl enable devtoolset-9 bash
gcc --version
```
## Step 3: Install third-party libraries
Before installing the third-party libraries, confirm that cmake is 3.7.2 or later and make is 3.82 or later; if not, install them first. StoneDB depends on marisa, rocksdb, and boost, and specifying an installation path when compiling them is recommended. The examples below specify installation paths for marisa, rocksdb, and boost.
### 1. Install cmake
```shell
wget https://cmake.org/files/v3.7/cmake-3.7.2.tar.gz
tar -zxvf cmake-3.7.2.tar.gz
cd cmake-3.7.2
./bootstrap && make && make install
/usr/local/bin/cmake --version
rm -rf /usr/bin/cmake
ln -s /usr/local/bin/cmake /usr/bin/
cmake --version
```
### 2. Install make
```shell
# Mirror: http://mirrors.ustc.edu.cn/gnu/make/
wget http://mirrors.ustc.edu.cn/gnu/make/make-3.82.tar.gz
tar -zxvf make-3.82.tar.gz
cd make-3.82
./configure --prefix=/usr/local/make
make && make install
rm -rf /usr/local/bin/make
ln -s /usr/local/make/bin/make /usr/local/bin/make
make --version
```
### 3. Install marisa
```shell
...
```
### 4. Install rocksdb
```shell
...
```
### 5. Install boost
```shell
wget https://sourceforge.net/projects/boost/files/boost/1.66.0/boost_1_66_0.tar.gz
tar -zxvf boost_1_66_0.tar.gz
cd boost_1_66_0
./bootstrap.sh --prefix=/usr/local/stonedb-boost
./b2 install --with=all
```
The boost installation path can be set as needed; in this example it is /usr/local/stonedb-boost.<br />Note: during compilation, messages containing "warning" or "failed" are normal unless a keyword "error" aborts the build; installing boost takes roughly 25 minutes.
## Step 4: Compile StoneDB
StoneDB currently has two branches, 5.6 and 5.7, and the downloaded source defaults to the 5.7 branch. The source package can be placed anywhere; in this example it sits under the root directory, and the build switches to the 5.6 branch.
```shell
cd /
git clone https://github.com/stoneatom/stonedb.git
cd stonedb
git checkout remotes/origin/stonedb-5.6
```
Before running the build script, modify two things in it:
1) The StoneDB installation directory, adjustable as needed; in this example it is /stonedb56/install;
2) The actual installation paths of marisa, rocksdb, and boost, which must match the paths used when installing them above.
```shell
# Edit the build script
cd /stonedb/scripts
vi stonedb_build.sh
...
install_target=/stonedb56/install
# Run the build script
sh stonedb_build.sh
```
Note: On CentOS/RedHat, comment out os_dist and os_dist_release and modify build_tag, because in the output of "lsb_release -a" the Distributor, Release, and Codename fields show n/a. Commenting out os_dist and os_dist_release only affects the generated log and tar package names, not the build result.
## Step 5: Start the instance
You can start StoneDB in either of two ways: manual installation or automatic installation.
### 1. Create the user
```shell
groupadd mysql
useradd -g mysql mysql
passwd mysql
```
### 2. Manual installation
After compilation, if the StoneDB installation directory is not /stonedb56, the reinstall.sh, install.sh, and my.cnf files are not generated automatically. You must create the directories, initialize, and start the instance by hand, and configure my.cnf with parameters such as the installation directory and port.
```shell
# Create the directories
mkdir -p /data/stonedb56/install/data/innodb
...
chown -R mysql:mysql /data/stonedb56/install/my.cnf
```
### 3. Automatic installation
```shell
cd /stonedb56/install
./reinstall.sh
```
Note: What is the difference between reinstall.sh and install.sh?<br />reinstall.sh is the automated installation script: running it creates the directories, initializes the instance, and starts it. Use it only for the first installation; running it at any later time deletes the whole directory and re-initializes the database. install.sh is a sample script for manual installation: adjust the paths for your custom installation directory and run it, and it likewise creates the directories, initializes, and starts the instance. Both scripts may only be used for the first installation.
### 4. Log in
```shell
/stonedb56/install/bin/mysql -uroot -p -S /stonedb56/install/tmp/mysql.sock
```
---
id: compile-using-docker
sidebar_position: 5.15
---
# Setting Up and Using a Docker Build Environment for StoneDB
## Environment overview
Setting up the third-party libraries for the build environment is tedious, and builds on Fedora, Ubuntu, and similar systems run into many missing dependencies that have to be installed by hand. To avoid that, a CentOS-based Docker build container is provided: it lets you compile StoneDB quickly, and after compiling you can start StoneDB directly inside the container for debugging.
```shell
yum install -y libedit-devel
yum install -y libaio-devel
yum install -y libicu
yum install -y libicu-devel
yum install -y jemalloc-devel
```
## Step 2: Install gcc 9.3.0
Run the following to check whether the current gcc version meets the installation requirement.
```shell
scl enable devtoolset-9 bash
gcc --version
```
## Step 3: Install third-party libraries
Before installing the third-party libraries, confirm that cmake is 3.7.2 or later and make is 3.82 or later; if not, install them first. StoneDB depends on marisa, rocksdb, and boost, and specifying an installation path when compiling them is recommended. The examples below specify installation paths for marisa, rocksdb, and boost.
### 1. Install cmake
```shell
wget https://cmake.org/files/v3.7/cmake-3.7.2.tar.gz
tar -zxvf cmake-3.7.2.tar.gz
cd cmake-3.7.2
./bootstrap && make && make install
/usr/local/bin/cmake --version
rm -rf /usr/bin/cmake
ln -s /usr/local/bin/cmake /usr/bin/
cmake --version
```
### 2. Install make
```shell
# Mirror: http://mirrors.ustc.edu.cn/gnu/make/
wget http://mirrors.ustc.edu.cn/gnu/make/make-3.82.tar.gz
tar -zxvf make-3.82.tar.gz
cd make-3.82
./configure --prefix=/usr/local/make
make && make install
rm -rf /usr/local/bin/make
ln -s /usr/local/make/bin/make /usr/local/bin/make
make --version
```
### 3. Install marisa
```shell
...
```
### 4. Install rocksdb
```shell
...
```
### 5. Install boost
```shell
wget https://sourceforge.net/projects/boost/files/boost/1.66.0/boost_1_66_0.tar.gz
tar -zxvf boost_1_66_0.tar.gz
cd boost_1_66_0
./bootstrap.sh --prefix=/usr/local/stonedb-boost
./b2 install --with=all
```
The boost installation path can be set as needed; in this example it is /usr/local/stonedb-boost.<br />Note: during compilation, messages containing "warning" or "failed" are normal unless a keyword "error" aborts the build; installing boost takes roughly 25 minutes.
## Step 4: Compile StoneDB
StoneDB currently has two branches, 5.6 and 5.7, and the downloaded source defaults to the 5.7 branch. The source package can be placed anywhere; in this example it sits under the root directory, and the build switches to the 5.6 branch.
```shell
cd /
git clone https://github.com/stoneatom/stonedb.git
cd stonedb
git checkout remotes/origin/stonedb-5.6
```
Before running the build script, modify two things in it:
1) The StoneDB installation directory, adjustable as needed; in this example it is /stonedb56/install;
2) The actual installation paths of marisa, rocksdb, and boost, which must match the paths used when installing them above.
```shell
# Edit the build script
cd /stonedb/scripts
vi stonedb_build.sh
...
install_target=/stonedb56/install
# Run the build script
sh stonedb_build.sh
```
Note: On CentOS/RedHat, comment out os_dist and os_dist_release and modify build_tag, because in the output of "lsb_release -a" the Distributor, Release, and Codename fields show n/a. Commenting out os_dist and os_dist_release only affects the generated log and tar package names, not the build result.
## Step 5: Start the instance
You can start StoneDB in either of two ways: manual installation or automatic installation.
### 1. Create the user
```shell
groupadd mysql
useradd -g mysql mysql
passwd mysql
```
```shell
sudo apt install -y libreadline-dev
sudo apt install -y libpam0g-dev
sudo apt install -y zlib1g-dev
sudo apt install -y libicu-dev
sudo apt install -y libboost-dev
sudo apt install -y libgflags-dev
sudo apt install -y libjemalloc-dev
sudo apt install -y libssl-dev
sudo apt install -y pkg-config
```
:::caution
Note: all the dependency packages must be installed; otherwise many errors will follow.
:::
## Step 2: Install third-party libraries
Before installing the third-party libraries, confirm that cmake is 3.7.2 or later and make is 3.82 or later; if not, install them first. StoneDB depends on marisa, rocksdb, and boost, and specifying an installation path when compiling them is recommended. The examples below specify installation paths for marisa, rocksdb, and boost.
### 1. Install cmake
```shell
wget https://cmake.org/files/v3.7/cmake-3.7.2.tar.gz
tar -zxvf cmake-3.7.2.tar.gz
cd cmake-3.7.2
./bootstrap && make && make install
/usr/local/bin/cmake --version
apt remove cmake -y
ln -s /usr/local/bin/cmake /usr/bin/
cmake --version
```
### 2. Install make
```shell
# Mirror: http://mirrors.ustc.edu.cn/gnu/make/
wget http://mirrors.ustc.edu.cn/gnu/make/make-3.82.tar.gz
tar -zxvf make-3.82.tar.gz
cd make-3.82
./configure --prefix=/usr/local/make
make && make install
rm -rf /usr/local/bin/make
ln -s /usr/local/make/bin/make /usr/local/bin/make
make --version
```
### 3. Install marisa
```shell
...
sudo make && make install
```
The marisa installation path can be set as needed; in this example it is /usr/local/stonedb-marisa. This step generates the following directories and files under /usr/local/stonedb-marisa/lib.

![marisa](./libmarisa.png)
### 4. Install rocksdb
```shell
...
sudo make install -j`nproc`
```
The rocksdb installation path can be set as needed; in this example it is /usr/local/stonedb-gcc-rocksdb. This step generates the following directories and files under /usr/local/stonedb-gcc-rocksdb.

![rocksdb](./librocksdb.png)
### 5. Install boost
```shell
wget https://sourceforge.net/projects/boost/files/boost/1.66.0/boost_1_66_0.tar.gz
tar -zxvf boost_1_66_0.tar.gz
cd boost_1_66_0
./bootstrap.sh --prefix=/usr/local/stonedb-boost
./b2 install --with=all
```
The boost installation path can be set as needed; in this example it is /usr/local/stonedb-boost. This step generates the following directories and files under /usr/local/stonedb-boost/lib.

![boost](./libboost.png)
:::info
During compilation, messages containing "warning" or "failed" are normal unless a keyword "error" aborts the build; installing boost takes roughly 25 minutes.
:::
## Step 3: Compile StoneDB
StoneDB currently has two branches, 5.6 and 5.7, and the downloaded source defaults to the 5.7 branch. The source package can be placed anywhere; in this example it sits under the root directory, and the build switches to the 5.6 branch.
```shell
cd /
git clone https://github.com/stoneatom/stonedb.git
cd stonedb
git checkout remotes/origin/stonedb-5.6
```
Before running the build script, modify two things in it:
1) The StoneDB installation directory, adjustable as needed; in this example it is /stonedb56/install;
2) The actual installation paths of marisa, rocksdb, and boost, which must match the paths used when installing them above.
```shell
# Edit the build script
cd /stonedb/scripts
vi stonedb_build.sh
...
install_target=/stonedb56/install
# Run the build script
sh stonedb_build.sh
```
Note: On CentOS/RedHat, comment out os_dist and os_dist_release and modify build_tag, because in the output of "lsb_release -a" the Distributor, Release, and Codename fields show n/a. Commenting out os_dist and os_dist_release only affects the generated log and tar package names, not the build result.
## Step 4: Start the instance
You can start StoneDB in either of two ways: manual installation or automatic installation.
### 1. Create the user
```shell
groupadd mysql
useradd -g mysql mysql
passwd mysql
```
### 2. Manual installation
```shell
...
chown -R mysql:mysql /data/stonedb56/install/my.cnf
```
### 3. Automatic installation
```shell
cd /stonedb56/install
./reinstall.sh
```
:::note
**Note: What is the difference between reinstall.sh and install.sh?**
*reinstall.sh* is the automated installation script: running it creates the directories, initializes the instance, and starts it. Use it only for the first installation; running it at any later time deletes the whole directory and re-initializes the database. install.sh is a sample script for manual installation: adjust the paths for your custom installation directory and run it, and it likewise creates the directories, initializes, and starts the instance. Both scripts may only be used for the first installation.
:::
### 4. Log in
```shell
/stonedb56/install/bin/mysql -uroot -p -S /stonedb56/install/tmp/mysql.sock
```
---
sidebar_position: 5.21
---
# Connecting to StoneDB with the MySQL Client

This topic describes how to connect to StoneDB with the MySQL client, including the prerequisites, a connection example, and parameter descriptions.
## Prerequisites
The MySQL client is installed locally; the current StoneDB release supports MySQL client versions 5.5, 5.6, and 5.7.
## Connection example
Enter the connection parameters in the format shown in the following example.
```shell
mysql -u<user> -p -h<host> -P<port> -S<socket> -A
/stonedb/install/bin/mysql -uroot -p -S /stonedb/install/tmp/mysql.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.36-StoneDB-log build-
Copyright (c) 2000, 2022 StoneAtom Group Holding Limited
No entry for terminal type "xterm";
using dumb terminal settings.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
```
## Parameter description
The main parameters are as follows:
- -h: IP address of the StoneDB server
- -u: user to connect as
- -p: the user's password; for safety it can be omitted on the command line and entered at the prompt, where it is not echoed
- -P: port of the StoneDB server, 3306 by default and customizable
- -A: no auto-rehash; the client skips reading table and column metadata at connect time for a faster startup (the database to use can be appended as the last argument)
- -S: socket file used to connect to StoneDB
To exit the StoneDB command line, type exit and press Enter.
:::tip
For more connection parameters, see the output of "mysql --help".
:::
---
sidebar_position: 5.22
---
# Connecting to StoneDB with Navicat

Navicat is a database management tool that supports multiple connections and can connect to relational databases such as Oracle, MySQL, and PostgreSQL; it is also compatible with StoneDB. Navicat has a well-designed user interface and covers the needs of developers and DBAs. Through Navicat you can connect to StoneDB and then create, manage, and maintain it.

The following is an example of connecting to StoneDB with Navicat:

##### Open Navicat and choose File -> New Connection -> MySQL
![image.png](Navicat_step1.png)
##### Enter the connection information
![image.png](Navicat_step2.png)
##### Click Test Connection; if it reports success, you are connected to StoneDB.
![image.png](Navicat_success.png)
:::info
The super administrator user ('root'@'localhost') cannot connect to the server from a client.
:::
---
sidebar_position: 5.54
---
# Creating and Managing Stored Procedures

Create a stored procedure that inserts 1,000,000 rows of random data.

Create the table:
```sql
CREATE TABLE t_user(
...
score INT NOT NULL,
copy_id INT NOT NULL,
PRIMARY KEY (`id`)
) engine=stonedb;
```
:::tip
The storage engine name is stonedb in StoneDB 5.6 and tianmu in StoneDB 5.7.
:::
Create the stored procedure:
```sql
DELIMITER //
...
//
DELIMITER ;
```
Call the stored procedure:
```sql
call add_user(1000000);
```
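The procedure body is elided in this diff. A minimal sketch of such a random-insert procedure might look like the following; the column list is an assumption, since only score and copy_id are visible in the table definition above:
```sql
-- Hypothetical body: inserts `num` rows of random data into t_user
DELIMITER //
CREATE PROCEDURE add_user(IN num INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < num DO
    -- adjust the column list to match the real table definition
    INSERT INTO t_user(score, copy_id) VALUES (FLOOR(RAND()*100), i);
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;
```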
Drop the stored procedure:
```sql
drop PROCEDURE add_user;
```
```sql
create table student(
    ...
birthday DATE
) engine=stonedb;
```
:::tip
The storage engine name is stonedb in StoneDB 5.6 and tianmu in StoneDB 5.7.
:::
To view the table structure, use the following SQL statement:
```sql
show create table student\G
```
Create a view, for example:
```sql
create view v_s as select name from teachers where age>18;
```
To view the CREATE VIEW statement, use the following SQL statement:
```sql
show create view v_s;
```
To drop a view, for example v_s, use the following SQL statement:
```sql
drop view v_s;
```
---
sidebar_position: 5.62
---
# Setting Parameters
All StoneDB parameters are stored in my.cnf, which can also contain the parameters of other storage engines. StoneDB parameters are modified the same way as those of other storage engines: they can be changed both statically and dynamically, but dynamically changed parameters are lost after StoneDB restarts.
:::tip
Note: parameter names are prefixed with stonedb in StoneDB 5.6 and with tianmu in StoneDB 5.7.
:::
# Specify the default storage engine
The storage engine type is determined by the default_storage_engine parameter, which can be changed dynamically at the session or global level. If the instance restarts, however, the change is lost and the parameter reverts to its original value. To make it permanent, edit the parameter file and then restart StoneDB.
```shell
mysql> show variables like 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | MyISAM |
+------------------------+--------+
1 row in set (0.00 sec)
mysql> set global default_storage_engine=tianmu;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
...
mysql> show variables like 'default_storage_engine';
+------------------------+---------+
| Variable_name | Value |
+------------------------+---------+
| default_storage_engine | TIANMU |
+------------------------+---------+
1 row in set (0.00 sec)
```
As shown above, the database's default storage engine was MyISAM; after the global change, the default storage engine is TIANMU.
```sql
mysql> shutdown;
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
...
mysql> show variables like 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | MyISAM |
+------------------------+--------+
1 row in set (0.00 sec)
```
As shown above, after the instance restarts, default_storage_engine reverts to MyISAM; to make a parameter change permanent, edit the parameter file and then restart StoneDB.
# Specify the insert buffer size
```sql
mysql> show variables like 'tianmu_insert_buffer_size';
+----------------------------+-------+
| Variable_name | Value |
+----------------------------+-------+
| tianmu_insert_buffer_size | 512 |
+----------------------------+-------+
1 row in set (0.00 sec)
# vi /stonedb/install/my.cnf
tianmu_insert_buffer_size = 1024
mysql> shutdown;
Query OK, 0 rows affected (0.00 sec)
# /stonedb/install/bin/mysqld_safe --datadir=/stonedb/install/data/ --user=mysql &
...
mysql> show variables like 'tianmu_insert_buffer_size';
+----------------------------+-------+
| Variable_name | Value |
+----------------------------+-------+
| tianmu_insert_buffer_size | 1024 |
+----------------------------+-------+
1 row in set (0.00 sec)
```
As shown above, the tianmu_insert_buffer_size parameter has been changed to 1024 MB.
---
sidebar_position: 5.63
---
# Error Codes

| **Error code** | **Meaning** |
| --- | --- |
| ERROR 2233 (HY000): Be disgraceful to storage engine, operating is forbidden! | Unsupported DDL |
| ERROR 1031 (HY000): Table storage engine for 'xxx' doesn't have this option | Unsupported DML |
| ERROR 1040 (HY000): Too many connections | The number of connections exceeds the max_connections parameter |
| ERROR 1045 (28000): Access denied for user 'u_test'@'%' (using password: YES) | Incorrect username or password, or insufficient privileges |
---
sidebar_position: 5.3
---
# DML Statements

StoneDB supports only the DML operations below; anything not listed is unsupported.
```sql
CREATE TABLE t_test(
id INT NOT NULL AUTO_INCREMENT,
...
score INT NOT NULL,
copy_id INT NOT NULL,
PRIMARY KEY (`id`)
) engine=stonedb;
```
:::info
- The storage engine name in StoneDB 5.6 is stonedb;
- The storage engine name in StoneDB 5.7 is tianmu.
:::
# insert
```sql
insert into t_test values(1,'jack','rose','0',58,1);
```
# update
```sql
update t_test set score=200 where id=1;
```
# insert into select
```sql
create table t_test2 like t_test;
insert into t_test2 select * from t_test;
```
# insert into on duplicate key update
```sql
insert into t_test1 values(1,'Bond','Jason','1',47,10) on duplicate key update last_name='James';
```
:::tip
The logic of the statement is to insert a row and, if a primary key or unique constraint conflict occurs, execute the trailing update instead.
:::
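A short walk-through of those semantics, assuming the row with id=1 already exists in t_test1 (the table layout follows the t_test example above):
```sql
select id, last_name from t_test1 where id=1;  -- suppose this returns 'Jason'
insert into t_test1 values(1,'Bond','Jason','1',47,10)
on duplicate key update last_name='James';     -- id=1 conflicts, so the update runs
select id, last_name from t_test1 where id=1;  -- now returns 'James'
```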
---
sidebar_position: 6.1
---
# Character Sets
```sql
mysql> show variables like 'character_set%';
...
+--------------------------+----------------------------------+
8 rows in set (0.01 sec)
```
1. The character set the client uses to send requests to the server: determined by the operating system's character set; LC_ALL, LC_CTYPE, and LANG decide the OS character set, in descending order of priority.
2. The character set the server assumes for client requests: determined by the character_set_client variable.
3. The character set the server converts to internally while processing: determined by the character_set_connection variable.
4. The character set the server uses for results returned to the client: determined by the character_set_results variable.
5. The character set the client uses to interpret the server's response: determined by the operating system's character set.

Suppose the server uses utf8 and the client is started with the --default-character-set=gbk option; then character_set_client, character_set_connection, and character_set_results are all set to gbk. If a table stores Chinese text, the server and the client encode those characters differently; on Unix the server converts according to character_set_connection, so the client clearly receives garbled output.

StoneDB's default character set is determined by the character_set_server variable. A database created without an explicit character set defaults to the value of character_set_server, and a table created without one inherits the database's character set. Once a StoneDB table has been created, its character set cannot be modified or converted.
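To keep the three session-level variables aligned, SET NAMES sets character_set_client, character_set_connection, and character_set_results in one statement (utf8 here is only an example value):
```sql
SET NAMES utf8;
SHOW VARIABLES LIKE 'character_set_%';  -- client, connection, and results now all show utf8
```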
---
sidebar_position: 6.2
---
# Data Types

StoneDB supports the following data types.
| Type | Members |
| --- | --- |
| Integer | TINYINT, SMALLINT, MEDIUMINT, INT, BIGINT |
| Fixed-point | DECIMAL |
| Date and time | TIME, DATE, DATETIME, TIMESTAMP |
| String | CHAR, VARCHAR, TINYTEXT, TEXT, MEDIUMTEXT, LONGTEXT |
| Binary string | BINARY, VARBINARY, TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB |
StoneDB创建表时不支持使用关键字unsigned、zerofill,各整型的取值范围如下。
StoneDB does not support the unsigned and zerofill keywords when creating tables (see the example after the table below). The value ranges of the integer types are as follows.
| Type | Bytes | Minimum | Maximum |
| --- | --- | --- | --- |
| TINYINT | 1 | -127 | 127 |
| SMALLINT | 2 | -32767 | 32767 |
| MEDIUMINT | 3 | -8388607 | 8388607 |
| INT | 4 | -2147483647 | 2147483647 |
| BIGINT | 8 | -9223372036854775808 | 9223372036854775807 |
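For example, a CREATE TABLE that uses unsigned is expected to be rejected (a sketch; the exact error text depends on the version):
```sql
create table t_bad(a INT UNSIGNED) engine=stonedb;  -- fails: unsigned is not supported
```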
DECIMAL precision must be 18 or less; anything larger is unsupported, so decimal(19) returns an error. DECIMAL(6, 2) means the integer part and the fraction part have at most 4 and 2 significant digits respectively, so the value range is [-9999.99, 9999.99].
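A quick illustration of the DECIMAL(6, 2) bounds (hypothetical table name; the 5.6 engine name is used as elsewhere in these docs):
```sql
create table t_dec(d DECIMAL(6,2)) engine=stonedb;
insert into t_dec values(9999.99);    -- within range
insert into t_dec values(10000.00);   -- out of range: more than 4 integer digits
```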
Different character sets occupy different amounts of storage even for the same declared length; the following size ranges apply to string types with the latin1 character set.
| Type | Size |
| --- | --- |
| CHAR | 0 to 255 bytes |
| VARCHAR | 0 to 65,535 bytes |
| TINYTEXT | 0 to 255 bytes |
| TEXT | 0 to 65,535 bytes |
| MEDIUMTEXT | 0 to 16,777,215 bytes |
| LONGTEXT | 0 to 4,294,967,295 bytes |

The value ranges of the date and time types are as follows.

| Type | Format | Minimum | Maximum |
| --- | --- | --- | --- |
| TIME | HH:MM:SS | -838:59:59 | 838:59:59 |
| DATE | YYYY-MM-DD | 0001-01-01 | 9999-12-31 |
| DATETIME | YYYY-MM-DD HH:MM:SS | 0001-01-01 00:00:00 | 9999-12-31 23:59:59 |
| TIMESTAMP | YYYY-MM-DD HH:MM:SS | 1970-01-01 08:00:01 | 2038-01-19 11:14:07 |