Unverified commit e12438aa, authored by chris-sun-star, committed by GitHub

add ob configserver (#943)

Parent bd06e657
@@ -116,3 +116,8 @@ tools/deploy/distributed.yaml
tools/deploy/obd_profile.sh
tools/deploy/single-with-proxy.yaml
tools/deploy/single.yaml
###### binary and test logs
tools/ob-configserver/bin/*
tools/ob-configserver/tests/*.log
tools/ob-configserver/tests/*.out
Legal Disclaimer
Within this source code, the comments in Chinese shall be the original, governing version. Any comment in other languages are for reference only. In the event of any conflict between the Chinese language version comments and other language version comments, the Chinese language version shall prevail.
法律免责声明
关于代码注释部分,中文注释为官方版本,其它语言注释仅做参考。中文注释可能与其它语言注释存在不一致,当中文注释与其它语言注释存在不一致时,请以中文注释为准。
\ No newline at end of file
include Makefile.common
.PHONY: all test clean build configserver
default: clean fmt build
build: build-debug
build-debug: set-debug-flags configserver
build-release: set-release-flags configserver
set-debug-flags:
@echo Build with debug flags
$(eval LDFLAGS += $(LDFLAGS_DEBUG))
set-release-flags:
@echo Build with release flags
$(eval LDFLAGS += -s -w)
$(eval LDFLAGS += $(LDFLAGS_RELEASE))
configserver:
$(GOBUILD) $(GO_RACE_FLAG) -ldflags '$(OB_CONFIGSERVER_LDFLAGS)' -o bin/ob-configserver cmd/main.go
test:
$(GOTEST) $(GOTEST_PACKAGES)
fmt:
@gofmt -s -w $(filter-out , $(GOFILES))
fmt-check:
@if [ -z "$(UNFMT_FILES)" ]; then \
echo "gofmt check passed"; \
exit 0; \
else \
echo "gofmt check failed, not formatted files:"; \
echo "$(UNFMT_FILES)" | tr -s " " "\n"; \
exit 1; \
fi
tidy:
$(GO) mod tidy
vet:
go vet $$(go list ./...)
clean:
rm -rf $(GOCOVERAGE_FILE)
rm -rf tests/mock/*
rm -rf bin/ob-configserver
$(GO) clean -i ./...
PROJECT=ob-configserver
PROCESSOR=2
VERSION=1.0
PWD ?= $(shell pwd)
GO := GO111MODULE=on GOPROXY=https://mirrors.aliyun.com/goproxy/,direct go
BUILD_FLAG := -p $(PROCESSOR)
GOBUILD := $(GO) build $(BUILD_FLAG)
GOBUILDCOVERAGE := $(GO) test -covermode=count -coverpkg="../..." -c .
GOCOVERAGE_FILE := tests/coverage.out
GOCOVERAGE_REPORT := tests/coverage-report
GOTEST := OB_CONFIGSERVER_CONFIG_PATH=$(PWD) $(GO) test -tags test -covermode=count -coverprofile=$(GOCOVERAGE_FILE) -p $(PROCESSOR)
GO_RACE_FLAG =-race
LDFLAGS += -X "github.com/oceanbase/configserver/config.Version=${VERSION}"
LDFLAGS += -X "github.com/oceanbase/configserver/config.BuildTimestamp=$(shell date -u '+%Y-%m-%d %H:%M:%S')"
LDFLAGS += -X "github.com/oceanbase/configserver/config.GitBranch=$(shell git rev-parse --abbrev-ref HEAD)"
LDFLAGS += -X "github.com/oceanbase/configserver/config.GitHash=$(shell git rev-parse HEAD)"
LDFLAGS_DEBUG = -X "github.com/oceanbase/configserver/config.Mode=debug"
LDFLAGS_RELEASE = -X "github.com/oceanbase/configserver/config.Mode=release"
OB_CONFIGSERVER_LDFLAGS = $(LDFLAGS) -X "github.com/oceanbase/configserver/config.CurProcess=ob-configserver"
GOFILES ?= $(shell git ls-files '*.go')
GOTEST_PACKAGES = $(shell go list ./... | grep -v -f tests/excludes.txt)
UNFMT_FILES ?= $(shell gofmt -l -s $(filter-out , $(GOFILES)))
# ob-configserver
## What is ob-configserver
ob-configserver is a web application that provides OceanBase metadata storage and query services.
## How to build
Building ob-configserver requires Go 1.16 or above.
### build binary
You can build ob-configserver using the commands listed below:
```bash
# build debug version
make build
# build release version
make build-release
```
You will get the compiled binary in the `bin` folder.
### build rpm
You can build an rpm package using the following commands:
```bash
cd {project_home}/rpm
bash ob-configserver-build.sh {project_home} ob-configserver 1
```
## How to run ob-configserver
### run binary directly
* copy the config file from etc/config.yaml and modify it to match the actual environment
* start ob-configserver with the following command
```bash
bin/ob-configserver -c path_to_config_file
```
### install rpm package
* install rpm package
```bash
rpm -ivh ob-configserver-xxx-x.el7.rpm
```
After installation, the directory layout looks like this:
```bash
.
├── bin
│   └── ob-configserver
├── conf
│   └── config.yaml
├── log
└── run
```
* modify config file
* start ob-configserver
```bash
bin/ob-configserver -c conf/config.yaml
```
## How to use ob-configserver
### config oceanbase to use ob-configserver
* configure obconfig_url when the observer starts up
```bash
# pass the following option to the observer start command via -o
obconfig_url='http://{vip_address}:{vip_port}/services?Action=ObRootServiceInfo&ObCluster={ob_cluster_name}'
```
* configure obconfig_url via SQL when the observer is already running
```sql
# run the following sql as the root user of the sys tenant
alter system set obconfig_url = 'http://{vip_address}:{vip_port}/services?Action=ObRootServiceInfo&ObCluster={ob_cluster_name}'
```
### config obproxy to use ob-configserver
* configure obproxy_config_server_url when obproxy starts up
```bash
# pass the following option to the obproxy start command via -o
obproxy_config_server_url='http://{vip_address}:{vip_port}/services?Action=GetObProxyConfig'
```
* configure obproxy_config_server_url via SQL when obproxy is already running
```sql
# run the following sql as root@proxysys
alter proxyconfig set obproxy_config_server_url='http://{vip_address}:{vip_port}/services?Action=GetObProxyConfig'
```
## API reference
[api reference](doc/api_reference.md)
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package main
import (
"os"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/oceanbase/configserver/config"
"github.com/oceanbase/configserver/logger"
"github.com/oceanbase/configserver/server"
)
var (
configserverCommand = &cobra.Command{
Use: "configserver",
Short: "configserver is used to store and query ob rs_list",
Long: "configserver is used to store and query ob rs_list, used by observer, obproxy and other tools",
Run: func(cmd *cobra.Command, args []string) {
err := runConfigServer()
if err != nil {
log.WithField("args:", args).Errorf("start configserver failed: %v", err)
}
},
}
)
func init() {
configserverCommand.PersistentFlags().StringP("config", "c", "etc/config.yaml", "config file")
_ = viper.BindPFlag("config", configserverCommand.PersistentFlags().Lookup("config"))
}
func main() {
if err := configserverCommand.Execute(); err != nil {
log.WithField("args", os.Args).Errorf("configserver execute failed %v", err)
}
}
func runConfigServer() error {
configFilePath := viper.GetString("config")
configServerConfig, err := config.ParseConfigServerConfig(configFilePath)
if err != nil {
return errors.Wrap(err, "read and parse configserver config")
}
// init logger
logger.InitLogger(logger.LoggerConfig{
Level: configServerConfig.Log.Level,
Filename: configServerConfig.Log.Filename,
MaxSize: configServerConfig.Log.MaxSize,
MaxAge: configServerConfig.Log.MaxAge,
MaxBackups: configServerConfig.Log.MaxBackups,
LocalTime: configServerConfig.Log.LocalTime,
Compress: configServerConfig.Log.Compress,
})
// init config server
configServer := server.NewConfigServer(configServerConfig)
err = configServer.Run()
if err != nil {
return errors.Wrap(err, "start config server")
}
return nil
}
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package config
var (
CurProcess string
Version string
Mode string
)
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package config
import (
"bytes"
"io/ioutil"
"os"
"gopkg.in/yaml.v3"
)
type ConfigServerConfig struct {
Log *LogConfig `yaml:"log"`
Server *ServerConfig `yaml:"server"`
Storage *StorageConfig `yaml:"storage"`
Vip *VipConfig `yaml:"vip"`
}
func ParseConfigServerConfig(configFilePath string) (*ConfigServerConfig, error) {
_, err := os.Stat(configFilePath)
if err != nil {
return nil, err
}
content, err := ioutil.ReadFile(configFilePath)
if err != nil {
return nil, err
}
config := new(ConfigServerConfig)
err = yaml.NewDecoder(bytes.NewReader(content)).Decode(config)
if err != nil {
return nil, err
}
return config, nil
}
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package config
type LogConfig struct {
Level string `yaml:"level"`
Filename string `yaml:"filename"`
MaxSize int `yaml:"maxsize"`
MaxAge int `yaml:"maxage"`
MaxBackups int `yaml:"maxbackups"`
LocalTime bool `yaml:"localtime"`
Compress bool `yaml:"compress"`
}
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package config
type ServerConfig struct {
Address string `yaml:"address"`
RunDir string `yaml:"run_dir"`
}
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package config
type StorageConfig struct {
DatabaseType string `yaml:"database_type"`
ConnectionUrl string `yaml:"connection_url"`
}
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package config
type VipConfig struct {
Address string `yaml:"address"`
Port int `yaml:"port"`
}
# ob-configserver API reference
For compatibility reasons, ob-configserver uses the `Action` parameter to distinguish different types of requests.
## Register OceanBase rootservice list
- request url: http://{vip_address}:{vip_port}/services
- request method: POST
- request parameters:
| name | type | required | typical value | description |
| --- | --- | --- | --- | --- |
| Action | String | Yes | ObRootServiceInfo | |
| ObCluster | String | No | obcluster | ob cluster name |
| ObClusterId | int64 | No | 1 | ob cluster id |
| ObRegion | String | No | obcluster | ob cluster name, old format |
| ObRegionId | int64 | No | 1 | ob cluster id, old format |
| version | int | No | 1 | version supports 1 or 2, 2 means with standby ob cluster support |
- request body:
```json
{
"ObClusterId": 1,
"ObRegionId": 1,
"ObCluster": "obcluster",
"ObRegion": "obcluster",
"ReadonlyRsList": [],
"RsList": [{
"address": "1.1.1.1:2882",
"role": "LEADER",
"sql_port": 2881
}],
"Type": "PRIMARY",
"timestamp": 1652419587417171
}
```
- response example:
```json
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": "successful",
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
```
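As an illustration only (not part of ob-configserver itself), the following Go sketch registers a rootservice list. The server address `127.0.0.1:8080` is a placeholder, and the body values are taken from the example above.
```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Request body following the example above; all values are placeholders.
	body := []byte(`{
		"ObClusterId": 1,
		"ObCluster": "obcluster",
		"ReadonlyRsList": [],
		"RsList": [{"address": "1.1.1.1:2882", "role": "LEADER", "sql_port": 2881}],
		"Type": "PRIMARY",
		"timestamp": 1652419587417171
	}`)
	// Action and the cluster identity are passed as query parameters.
	url := "http://127.0.0.1:8080/services?Action=ObRootServiceInfo&version=2&ObCluster=obcluster&ObClusterId=1"
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	data, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(data))
}
```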
## Query OceanBase rootservice list
- request url: http://{vip_address}:{vip_port}/services
- request method: GET
- request parameters:
| name | type | required | typical value | description |
| --- | --- | --- | --- | --- |
| Action | String | Yes | ObRootServiceInfo | |
| ObCluster | String | No | obcluster | ob cluster name |
| ObClusterId | int64 | No | 1 | ob cluster id |
| ObRegion | String | No | obcluster | ob cluster name, old format |
| ObRegionId | int64 | No | 1 | ob cluster id, old format |
| version | int | No | 1 | version supports 1 or 2, 2 means with standby ob cluster support |
- response example:
```json
# return one item when version=1, or when version=2 and ObClusterId is specified
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": {
"ObClusterId": 1,
"ObRegionId": 1,
"ObCluster": "obcluster",
"ObRegion": "obcluster",
"ReadonlyRsList": [],
"RsList": [{
"address": "1.1.1.1:2882",
"role": "LEADER",
"sql_port": 2881
}],
"Type": "PRIMARY",
"timestamp": 1652419587417171
},
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
# return a list when version=2 and ObClusterId not specified
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": [{
"ObClusterId": 1,
"ObRegionId": 1,
"ObCluster": "obcluster",
"ObRegion": "obcluster",
"ReadonlyRsList": [],
"RsList": [{
"address": "1.1.1.1:2882",
"role": "LEADER",
"sql_port": 2881
}],
"Type": "PRIMARY",
"timestamp": 1652419587417171
}, {
"ObClusterId": 2,
"ObRegionId": 2,
"ObCluster": "obcluster",
"ObRegion": "obcluster",
"ReadonlyRsList": [],
"RsList": [{
"address": "2.2.2.2:2882",
"role": "LEADER",
"sql_port": 2881
}],
"Type": "STANDBY",
"timestamp": 1652436572067984
}],
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
```
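A minimal Go sketch of the query side, again assuming the placeholder address `127.0.0.1:8080`. Because `Data` is either a single object or a list depending on the parameters, it is left as raw JSON here; the `response` struct is illustrative, not an ob-configserver type.
```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// response mirrors the envelope shown in the examples above (illustrative only).
type response struct {
	Code    int             `json:"Code"`
	Message string          `json:"Message"`
	Success bool            `json:"Success"`
	Data    json.RawMessage `json:"Data"` // object or list, depending on version/ObClusterId
}

func main() {
	url := "http://127.0.0.1:8080/services?Action=ObRootServiceInfo&version=2&ObCluster=obcluster&ObClusterId=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var r response
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		panic(err)
	}
	fmt.Println(r.Code, r.Message, string(r.Data))
}
```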
## Delete OceanBase rootservice info
- request url: http://{vip_address}:{vip_port}/services
- request method: DELETE
- request parameters:
| name | type | required | typical value | description |
| --- | --- | --- | --- | --- |
| Action | String | Yes | ObRootServiceInfo | |
| ObCluster | String | No | obcluster | ob cluster name |
| ObClusterId | int64 | No | 1 | ob cluster id |
| ObRegion | String | No | obcluster | ob cluster name, old format |
| ObRegionId | int64 | No | 1 | ob cluster id, old format |
| version | int | No | 2 | only version=2 is supported, and ObClusterId must also be specified |
- response example:
```json
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": "successful",
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
```
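For completeness, a hedged sketch of the delete call under the same placeholder address and cluster values; the standard library requires building the DELETE request explicitly.
```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// version=2 is required for deletion, and ObClusterId must be specified.
	url := "http://127.0.0.1:8080/services?Action=ObRootServiceInfo&version=2&ObCluster=obcluster&ObClusterId=1"
	req, err := http.NewRequest(http.MethodDelete, url, nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	data, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(data))
}
```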
## Query rootservice info of all OceanBase clusters
- request url: http://{vip_address}:{vip_port}/services
- request method: GET/POST
- request parameters:
| name | type | required | typical value | description |
| --- | --- | --- | --- | --- |
| Action | String | Yes | GetObProxyConfig | |
| VersionOnly | Boolean | No | false | only return version |
- response example:
```json
# return all info
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": {
"ObProxyBinUrl": "http://1.1.1.1:8080/client?Action=GetObProxy",
"ObProxyDatabaseInfo": {
"DataBase": "***",
"MetaDataBase": "http://1.1.1.1:8080/services?Action=ObRootServiceInfo&User_ID=alibaba&UID=admin&ObRegion=obdv1",
"Password": "***",
"User": "***"
},
"ObRootServiceInfoUrlList": [{
"ObRegion": "obcluster",
"ObRootServiceInfoUrl": "http://1.1.1.1:8080/services?Action=ObRootServiceInfo&ObCluster=obcluster"
}],
"Version": "07c5563d293278097dc84e6b64ef6341"
},
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
# return version only
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": {
"Version": "07c5563d293278097dc84e6b64ef6341"
},
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
```
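The sketch below (placeholder address, illustrative only) fetches the obproxy config twice: once for the full payload and once with `VersionOnly=true`, which returns only the config version hash and is useful for cheap change detection.
```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// get issues a GET request and returns the raw response body.
func get(url string) string {
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	data, _ := ioutil.ReadAll(resp.Body)
	return string(data)
}

func main() {
	base := "http://127.0.0.1:8080/services?Action=GetObProxyConfig"
	// Full payload, including ObRootServiceInfoUrlList and the config version.
	fmt.Println(get(base))
	// Only the version hash.
	fmt.Println(get(base + "&VersionOnly=true"))
}
```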
## Query rootservice info of all OceanBase clusters in template format
- request url: http://{vip_address}:{vip_port}/services
- request method: GET/POST
- request parameters:
| name | type | required | typical value | description |
| --- | --- | --- | --- | --- |
| Action | String | Yes | GetObRootServiceInfoUrlTemplate | |
| VersionOnly | Boolean | No | false | only return version |
- response example:
```json
# return all info
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": {
"ObProxyBinUrl": "http://1.1.1.1:8080/client?Action=GetObProxy",
"ObProxyDatabaseInfo": {
"DataBase": "***",
"MetaDataBase": "http://1.1.1.1:8080/services?Action=ObRootServiceInfo&User_ID=alibaba&UID=admin&ObRegion=obdv1",
"Password": "***",
"User": "***"
},
"Version": "b34e6381994003c5d758890ededb82a4",
"ObClusterList": ["obcluster"],
"ObRootServiceInfoUrlTemplate": "http://1.1.1.1:8080/services?Action=ObRootServiceInfo&ObRegion=${ObRegion}",
"ObRootServiceInfoUrlTemplateV2": "http://1.1.1.1:8080/services?Action=ObRootServiceInfo&version=2&ObCluster=${ObCluster}&ObClusterId=${OBClusterId}"
},
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
# return version only
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": {
"Version": "b34e6381994003c5d758890ededb82a4"
},
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
```
## Query IDC and region info (empty implementation, for compatibility only)
- request url: http://{vip_address}:{vip_port}/services
- request method: GET/POST
- request parameters:
| name | type | required | typical value | description |
| --- | --- | --- | --- | --- |
| Action | String | Yes | ObIDCRegionInfo | |
| ObCluster | String | No | obcluster | ob cluster name |
| ObClusterId | int64 | No | 1 | ob cluster id |
| ObRegion | String | No | obcluster | ob cluster name, old format |
| ObRegionId | int64 | No | 1 | ob cluster id, old format |
| version | int | No | 1 | version supports 1 or 2, 2 means with standby ob cluster support |
- response example:
```json
# return a single item when version=1, or when version=2 and ObClusterId is specified
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": {
"ObRegion": "obcluster",
"ObRegionId": 2,
"IDCList": [],
"ReadonlyRsList": ""
},
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
# return a list when version=2 and ObClusterId not specified
{
"Code": 200,
"Message": "successful",
"Success": true,
"Data": [{
"ObRegion": "obcluster",
"ObRegionId": 1,
"IDCList": [],
"ReadonlyRsList": ""
}, {
"ObRegion": "obcluster",
"ObRegionId": 2,
"IDCList": [],
"ReadonlyRsList": ""
}],
"Trace": "xxxx",
"Server": "1.1.1.1",
"Cost": 1
}
```
// Code generated by entc, DO NOT EDIT.
package ent
import (
"context"
"fmt"
"log"
"github.com/oceanbase/configserver/ent/migrate"
"github.com/oceanbase/configserver/ent/obcluster"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql"
)
// Client is the client that holds all ent builders.
type Client struct {
config
// Schema is the client for creating, migrating and dropping schema.
Schema *migrate.Schema
// ObCluster is the client for interacting with the ObCluster builders.
ObCluster *ObClusterClient
}
// NewClient creates a new client configured with the given options.
func NewClient(opts ...Option) *Client {
cfg := config{log: log.Println, hooks: &hooks{}}
cfg.options(opts...)
client := &Client{config: cfg}
client.init()
return client
}
func (c *Client) init() {
c.Schema = migrate.NewSchema(c.driver)
c.ObCluster = NewObClusterClient(c.config)
}
// Open opens a database/sql.DB specified by the driver name and
// the data source name, and returns a new client attached to it.
// Optional parameters can be added for configuring the client.
func Open(driverName, dataSourceName string, options ...Option) (*Client, error) {
switch driverName {
case dialect.MySQL, dialect.Postgres, dialect.SQLite:
drv, err := sql.Open(driverName, dataSourceName)
if err != nil {
return nil, err
}
return NewClient(append(options, Driver(drv))...), nil
default:
return nil, fmt.Errorf("unsupported driver: %q", driverName)
}
}
// Tx returns a new transactional client. The provided context
// is used until the transaction is committed or rolled back.
func (c *Client) Tx(ctx context.Context) (*Tx, error) {
if _, ok := c.driver.(*txDriver); ok {
return nil, fmt.Errorf("ent: cannot start a transaction within a transaction")
}
tx, err := newTx(ctx, c.driver)
if err != nil {
return nil, fmt.Errorf("ent: starting a transaction: %w", err)
}
cfg := c.config
cfg.driver = tx
return &Tx{
ctx: ctx,
config: cfg,
ObCluster: NewObClusterClient(cfg),
}, nil
}
// BeginTx returns a transactional client with specified options.
func (c *Client) BeginTx(ctx context.Context, opts *sql.TxOptions) (*Tx, error) {
if _, ok := c.driver.(*txDriver); ok {
return nil, fmt.Errorf("ent: cannot start a transaction within a transaction")
}
tx, err := c.driver.(interface {
BeginTx(context.Context, *sql.TxOptions) (dialect.Tx, error)
}).BeginTx(ctx, opts)
if err != nil {
return nil, fmt.Errorf("ent: starting a transaction: %w", err)
}
cfg := c.config
cfg.driver = &txDriver{tx: tx, drv: c.driver}
return &Tx{
ctx: ctx,
config: cfg,
ObCluster: NewObClusterClient(cfg),
}, nil
}
// Debug returns a new debug-client. It's used to get verbose logging on specific operations.
//
// client.Debug().
// ObCluster.
// Query().
// Count(ctx)
//
func (c *Client) Debug() *Client {
if c.debug {
return c
}
cfg := c.config
cfg.driver = dialect.Debug(c.driver, c.log)
client := &Client{config: cfg}
client.init()
return client
}
// Close closes the database connection and prevents new queries from starting.
func (c *Client) Close() error {
return c.driver.Close()
}
// Use adds the mutation hooks to all the entity clients.
// In order to add hooks to a specific client, call: `client.Node.Use(...)`.
func (c *Client) Use(hooks ...Hook) {
c.ObCluster.Use(hooks...)
}
// ObClusterClient is a client for the ObCluster schema.
type ObClusterClient struct {
config
}
// NewObClusterClient returns a client for the ObCluster from the given config.
func NewObClusterClient(c config) *ObClusterClient {
return &ObClusterClient{config: c}
}
// Use adds a list of mutation hooks to the hooks stack.
// A call to `Use(f, g, h)` equals to `obcluster.Hooks(f(g(h())))`.
func (c *ObClusterClient) Use(hooks ...Hook) {
c.hooks.ObCluster = append(c.hooks.ObCluster, hooks...)
}
// Create returns a create builder for ObCluster.
func (c *ObClusterClient) Create() *ObClusterCreate {
mutation := newObClusterMutation(c.config, OpCreate)
return &ObClusterCreate{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// CreateBulk returns a builder for creating a bulk of ObCluster entities.
func (c *ObClusterClient) CreateBulk(builders ...*ObClusterCreate) *ObClusterCreateBulk {
return &ObClusterCreateBulk{config: c.config, builders: builders}
}
// Update returns an update builder for ObCluster.
func (c *ObClusterClient) Update() *ObClusterUpdate {
mutation := newObClusterMutation(c.config, OpUpdate)
return &ObClusterUpdate{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// UpdateOne returns an update builder for the given entity.
func (c *ObClusterClient) UpdateOne(oc *ObCluster) *ObClusterUpdateOne {
mutation := newObClusterMutation(c.config, OpUpdateOne, withObCluster(oc))
return &ObClusterUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// UpdateOneID returns an update builder for the given id.
func (c *ObClusterClient) UpdateOneID(id int) *ObClusterUpdateOne {
mutation := newObClusterMutation(c.config, OpUpdateOne, withObClusterID(id))
return &ObClusterUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// Delete returns a delete builder for ObCluster.
func (c *ObClusterClient) Delete() *ObClusterDelete {
mutation := newObClusterMutation(c.config, OpDelete)
return &ObClusterDelete{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// DeleteOne returns a delete builder for the given entity.
func (c *ObClusterClient) DeleteOne(oc *ObCluster) *ObClusterDeleteOne {
return c.DeleteOneID(oc.ID)
}
// DeleteOneID returns a delete builder for the given id.
func (c *ObClusterClient) DeleteOneID(id int) *ObClusterDeleteOne {
builder := c.Delete().Where(obcluster.ID(id))
builder.mutation.id = &id
builder.mutation.op = OpDeleteOne
return &ObClusterDeleteOne{builder}
}
// Query returns a query builder for ObCluster.
func (c *ObClusterClient) Query() *ObClusterQuery {
return &ObClusterQuery{
config: c.config,
}
}
// Get returns a ObCluster entity by its id.
func (c *ObClusterClient) Get(ctx context.Context, id int) (*ObCluster, error) {
return c.Query().Where(obcluster.ID(id)).Only(ctx)
}
// GetX is like Get, but panics if an error occurs.
func (c *ObClusterClient) GetX(ctx context.Context, id int) *ObCluster {
obj, err := c.Get(ctx, id)
if err != nil {
panic(err)
}
return obj
}
// Hooks returns the client hooks.
func (c *ObClusterClient) Hooks() []Hook {
return c.hooks.ObCluster
}
// Code generated by entc, DO NOT EDIT.
package ent
import (
"entgo.io/ent"
"entgo.io/ent/dialect"
)
// Option function to configure the client.
type Option func(*config)
// Config is the configuration for the client and its builder.
type config struct {
// driver used for executing database requests.
driver dialect.Driver
// debug enable a debug logging.
debug bool
// log used for logging on debug mode.
log func(...interface{})
// hooks to execute on mutations.
hooks *hooks
}
// hooks per client, for fast access.
type hooks struct {
ObCluster []ent.Hook
}
// Options applies the options on the config object.
func (c *config) options(opts ...Option) {
for _, opt := range opts {
opt(c)
}
if c.debug {
c.driver = dialect.Debug(c.driver, c.log)
}
}
// Debug enables debug logging on the ent.Driver.
func Debug() Option {
return func(c *config) {
c.debug = true
}
}
// Log sets the logging function for debug mode.
func Log(fn func(...interface{})) Option {
return func(c *config) {
c.log = fn
}
}
// Driver configures the client driver.
func Driver(driver dialect.Driver) Option {
return func(c *config) {
c.driver = driver
}
}
// Code generated by entc, DO NOT EDIT.
package ent
import (
"context"
)
type clientCtxKey struct{}
// FromContext returns a Client stored inside a context, or nil if there isn't one.
func FromContext(ctx context.Context) *Client {
c, _ := ctx.Value(clientCtxKey{}).(*Client)
return c
}
// NewContext returns a new context with the given Client attached.
func NewContext(parent context.Context, c *Client) context.Context {
return context.WithValue(parent, clientCtxKey{}, c)
}
type txCtxKey struct{}
// TxFromContext returns a Tx stored inside a context, or nil if there isn't one.
func TxFromContext(ctx context.Context) *Tx {
tx, _ := ctx.Value(txCtxKey{}).(*Tx)
return tx
}
// NewTxContext returns a new context with the given Tx attached.
func NewTxContext(parent context.Context, tx *Tx) context.Context {
return context.WithValue(parent, txCtxKey{}, tx)
}
// Code generated by entc, DO NOT EDIT.
package ent
import (
"errors"
"fmt"
"entgo.io/ent"
"entgo.io/ent/dialect/sql"
"github.com/oceanbase/configserver/ent/obcluster"
)
// ent aliases to avoid import conflicts in user's code.
type (
Op = ent.Op
Hook = ent.Hook
Value = ent.Value
Query = ent.Query
Policy = ent.Policy
Mutator = ent.Mutator
Mutation = ent.Mutation
MutateFunc = ent.MutateFunc
)
// OrderFunc applies an ordering on the sql selector.
type OrderFunc func(*sql.Selector)
// columnChecker returns a function indicates if the column exists in the given column.
func columnChecker(table string) func(string) error {
checks := map[string]func(string) bool{
obcluster.Table: obcluster.ValidColumn,
}
check, ok := checks[table]
if !ok {
return func(string) error {
return fmt.Errorf("unknown table %q", table)
}
}
return func(column string) error {
if !check(column) {
return fmt.Errorf("unknown column %q for table %q", column, table)
}
return nil
}
}
// Asc applies the given fields in ASC order.
func Asc(fields ...string) OrderFunc {
return func(s *sql.Selector) {
check := columnChecker(s.TableName())
for _, f := range fields {
if err := check(f); err != nil {
s.AddError(&ValidationError{Name: f, err: fmt.Errorf("ent: %w", err)})
}
s.OrderBy(sql.Asc(s.C(f)))
}
}
}
// Desc applies the given fields in DESC order.
func Desc(fields ...string) OrderFunc {
return func(s *sql.Selector) {
check := columnChecker(s.TableName())
for _, f := range fields {
if err := check(f); err != nil {
s.AddError(&ValidationError{Name: f, err: fmt.Errorf("ent: %w", err)})
}
s.OrderBy(sql.Desc(s.C(f)))
}
}
}
// AggregateFunc applies an aggregation step on the group-by traversal/selector.
type AggregateFunc func(*sql.Selector) string
// As is a pseudo aggregation function for renaming another other functions with custom names. For example:
//
// GroupBy(field1, field2).
// Aggregate(ent.As(ent.Sum(field1), "sum_field1"), (ent.As(ent.Sum(field2), "sum_field2")).
// Scan(ctx, &v)
//
func As(fn AggregateFunc, end string) AggregateFunc {
return func(s *sql.Selector) string {
return sql.As(fn(s), end)
}
}
// Count applies the "count" aggregation function on each group.
func Count() AggregateFunc {
return func(s *sql.Selector) string {
return sql.Count("*")
}
}
// Max applies the "max" aggregation function on the given field of each group.
func Max(field string) AggregateFunc {
return func(s *sql.Selector) string {
check := columnChecker(s.TableName())
if err := check(field); err != nil {
s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
return ""
}
return sql.Max(s.C(field))
}
}
// Mean applies the "mean" aggregation function on the given field of each group.
func Mean(field string) AggregateFunc {
return func(s *sql.Selector) string {
check := columnChecker(s.TableName())
if err := check(field); err != nil {
s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
return ""
}
return sql.Avg(s.C(field))
}
}
// Min applies the "min" aggregation function on the given field of each group.
func Min(field string) AggregateFunc {
return func(s *sql.Selector) string {
check := columnChecker(s.TableName())
if err := check(field); err != nil {
s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
return ""
}
return sql.Min(s.C(field))
}
}
// Sum applies the "sum" aggregation function on the given field of each group.
func Sum(field string) AggregateFunc {
return func(s *sql.Selector) string {
check := columnChecker(s.TableName())
if err := check(field); err != nil {
s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
return ""
}
return sql.Sum(s.C(field))
}
}
// ValidationError returns when validating a field or edge fails.
type ValidationError struct {
Name string // Field or edge name.
err error
}
// Error implements the error interface.
func (e *ValidationError) Error() string {
return e.err.Error()
}
// Unwrap implements the errors.Wrapper interface.
func (e *ValidationError) Unwrap() error {
return e.err
}
// IsValidationError returns a boolean indicating whether the error is a validation error.
func IsValidationError(err error) bool {
if err == nil {
return false
}
var e *ValidationError
return errors.As(err, &e)
}
// NotFoundError returns when trying to fetch a specific entity and it was not found in the database.
type NotFoundError struct {
label string
}
// Error implements the error interface.
func (e *NotFoundError) Error() string {
return "ent: " + e.label + " not found"
}
// IsNotFound returns a boolean indicating whether the error is a not found error.
func IsNotFound(err error) bool {
if err == nil {
return false
}
var e *NotFoundError
return errors.As(err, &e)
}
// MaskNotFound masks not found error.
func MaskNotFound(err error) error {
if IsNotFound(err) {
return nil
}
return err
}
// NotSingularError returns when trying to fetch a singular entity and more then one was found in the database.
type NotSingularError struct {
label string
}
// Error implements the error interface.
func (e *NotSingularError) Error() string {
return "ent: " + e.label + " not singular"
}
// IsNotSingular returns a boolean indicating whether the error is a not singular error.
func IsNotSingular(err error) bool {
if err == nil {
return false
}
var e *NotSingularError
return errors.As(err, &e)
}
// NotLoadedError returns when trying to get a node that was not loaded by the query.
type NotLoadedError struct {
edge string
}
// Error implements the error interface.
func (e *NotLoadedError) Error() string {
return "ent: " + e.edge + " edge was not loaded"
}
// IsNotLoaded returns a boolean indicating whether the error is a not loaded error.
func IsNotLoaded(err error) bool {
if err == nil {
return false
}
var e *NotLoadedError
return errors.As(err, &e)
}
// ConstraintError returns when trying to create/update one or more entities and
// one or more of their constraints failed. For example, violation of edge or
// field uniqueness.
type ConstraintError struct {
msg string
wrap error
}
// Error implements the error interface.
func (e ConstraintError) Error() string {
return "ent: constraint failed: " + e.msg
}
// Unwrap implements the errors.Wrapper interface.
func (e *ConstraintError) Unwrap() error {
return e.wrap
}
// IsConstraintError returns a boolean indicating whether the error is a constraint failure.
func IsConstraintError(err error) bool {
if err == nil {
return false
}
var e *ConstraintError
return errors.As(err, &e)
}
// Code generated by entc, DO NOT EDIT.
package enttest
import (
"context"
"github.com/oceanbase/configserver/ent"
// required by schema hooks.
_ "github.com/oceanbase/configserver/ent/runtime"
"entgo.io/ent/dialect/sql/schema"
)
type (
// TestingT is the interface that is shared between
// testing.T and testing.B and used by enttest.
TestingT interface {
FailNow()
Error(...interface{})
}
// Option configures client creation.
Option func(*options)
options struct {
opts []ent.Option
migrateOpts []schema.MigrateOption
}
)
// WithOptions forwards options to client creation.
func WithOptions(opts ...ent.Option) Option {
return func(o *options) {
o.opts = append(o.opts, opts...)
}
}
// WithMigrateOptions forwards options to auto migration.
func WithMigrateOptions(opts ...schema.MigrateOption) Option {
return func(o *options) {
o.migrateOpts = append(o.migrateOpts, opts...)
}
}
func newOptions(opts []Option) *options {
o := &options{}
for _, opt := range opts {
opt(o)
}
return o
}
// Open calls ent.Open and auto-run migration.
func Open(t TestingT, driverName, dataSourceName string, opts ...Option) *ent.Client {
o := newOptions(opts)
c, err := ent.Open(driverName, dataSourceName, o.opts...)
if err != nil {
t.Error(err)
t.FailNow()
}
if err := c.Schema.Create(context.Background(), o.migrateOpts...); err != nil {
t.Error(err)
t.FailNow()
}
return c
}
// NewClient calls ent.NewClient and auto-run migration.
func NewClient(t TestingT, opts ...Option) *ent.Client {
o := newOptions(opts)
c := ent.NewClient(o.opts...)
if err := c.Schema.Create(context.Background(), o.migrateOpts...); err != nil {
t.Error(err)
t.FailNow()
}
return c
}
package ent
//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate ./schema
// Code generated by entc, DO NOT EDIT.
package hook
import (
"context"
"fmt"
"github.com/oceanbase/configserver/ent"
)
// The ObClusterFunc type is an adapter to allow the use of ordinary
// function as ObCluster mutator.
type ObClusterFunc func(context.Context, *ent.ObClusterMutation) (ent.Value, error)
// Mutate calls f(ctx, m).
func (f ObClusterFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
mv, ok := m.(*ent.ObClusterMutation)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.ObClusterMutation", m)
}
return f(ctx, mv)
}
// Condition is a hook condition function.
type Condition func(context.Context, ent.Mutation) bool
// And groups conditions with the AND operator.
func And(first, second Condition, rest ...Condition) Condition {
return func(ctx context.Context, m ent.Mutation) bool {
if !first(ctx, m) || !second(ctx, m) {
return false
}
for _, cond := range rest {
if !cond(ctx, m) {
return false
}
}
return true
}
}
// Or groups conditions with the OR operator.
func Or(first, second Condition, rest ...Condition) Condition {
return func(ctx context.Context, m ent.Mutation) bool {
if first(ctx, m) || second(ctx, m) {
return true
}
for _, cond := range rest {
if cond(ctx, m) {
return true
}
}
return false
}
}
// Not negates a given condition.
func Not(cond Condition) Condition {
return func(ctx context.Context, m ent.Mutation) bool {
return !cond(ctx, m)
}
}
// HasOp is a condition testing mutation operation.
func HasOp(op ent.Op) Condition {
return func(_ context.Context, m ent.Mutation) bool {
return m.Op().Is(op)
}
}
// HasAddedFields is a condition validating `.AddedField` on fields.
func HasAddedFields(field string, fields ...string) Condition {
return func(_ context.Context, m ent.Mutation) bool {
if _, exists := m.AddedField(field); !exists {
return false
}
for _, field := range fields {
if _, exists := m.AddedField(field); !exists {
return false
}
}
return true
}
}
// HasClearedFields is a condition validating `.FieldCleared` on fields.
func HasClearedFields(field string, fields ...string) Condition {
return func(_ context.Context, m ent.Mutation) bool {
if exists := m.FieldCleared(field); !exists {
return false
}
for _, field := range fields {
if exists := m.FieldCleared(field); !exists {
return false
}
}
return true
}
}
// HasFields is a condition validating `.Field` on fields.
func HasFields(field string, fields ...string) Condition {
return func(_ context.Context, m ent.Mutation) bool {
if _, exists := m.Field(field); !exists {
return false
}
for _, field := range fields {
if _, exists := m.Field(field); !exists {
return false
}
}
return true
}
}
// If executes the given hook under condition.
//
// hook.If(ComputeAverage, And(HasFields(...), HasAddedFields(...)))
//
func If(hk ent.Hook, cond Condition) ent.Hook {
return func(next ent.Mutator) ent.Mutator {
return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
if cond(ctx, m) {
return hk(next).Mutate(ctx, m)
}
return next.Mutate(ctx, m)
})
}
}
// On executes the given hook only for the given operation.
//
// hook.On(Log, ent.Delete|ent.Create)
//
func On(hk ent.Hook, op ent.Op) ent.Hook {
return If(hk, HasOp(op))
}
// Unless skips the given hook only for the given operation.
//
// hook.Unless(Log, ent.Update|ent.UpdateOne)
//
func Unless(hk ent.Hook, op ent.Op) ent.Hook {
return If(hk, Not(HasOp(op)))
}
// FixedError is a hook returning a fixed error.
func FixedError(err error) ent.Hook {
return func(ent.Mutator) ent.Mutator {
return ent.MutateFunc(func(context.Context, ent.Mutation) (ent.Value, error) {
return nil, err
})
}
}
// Reject returns a hook that rejects all operations that match op.
//
// func (T) Hooks() []ent.Hook {
// return []ent.Hook{
// Reject(ent.Delete|ent.Update),
// }
// }
//
func Reject(op ent.Op) ent.Hook {
hk := FixedError(fmt.Errorf("%s operation is not allowed", op))
return On(hk, op)
}
// Chain acts as a list of hooks and is effectively immutable.
// Once created, it will always hold the same set of hooks in the same order.
type Chain struct {
hooks []ent.Hook
}
// NewChain creates a new chain of hooks.
func NewChain(hooks ...ent.Hook) Chain {
return Chain{append([]ent.Hook(nil), hooks...)}
}
// Hook chains the list of hooks and returns the final hook.
func (c Chain) Hook() ent.Hook {
return func(mutator ent.Mutator) ent.Mutator {
for i := len(c.hooks) - 1; i >= 0; i-- {
mutator = c.hooks[i](mutator)
}
return mutator
}
}
// Append extends a chain, adding the specified hook
// as the last ones in the mutation flow.
func (c Chain) Append(hooks ...ent.Hook) Chain {
newHooks := make([]ent.Hook, 0, len(c.hooks)+len(hooks))
newHooks = append(newHooks, c.hooks...)
newHooks = append(newHooks, hooks...)
return Chain{newHooks}
}
// Extend extends a chain, adding the specified chain
// as the last ones in the mutation flow.
func (c Chain) Extend(chain Chain) Chain {
return c.Append(chain.hooks...)
}
// Code generated by entc, DO NOT EDIT.
package migrate
import (
"context"
"fmt"
"io"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql/schema"
)
var (
// WithGlobalUniqueID sets the universal ids options to the migration.
// If this option is enabled, ent migration will allocate a 1<<32 range
// for the ids of each entity (table).
// Note that this option cannot be applied on tables that already exist.
WithGlobalUniqueID = schema.WithGlobalUniqueID
// WithDropColumn sets the drop column option to the migration.
// If this option is enabled, ent migration will drop old columns
// that were used for both fields and edges. This defaults to false.
WithDropColumn = schema.WithDropColumn
// WithDropIndex sets the drop index option to the migration.
// If this option is enabled, ent migration will drop old indexes
// that were defined in the schema. This defaults to false.
// Note that unique constraints are defined using `UNIQUE INDEX`,
// and therefore, it's recommended to enable this option to get more
// flexibility in the schema changes.
WithDropIndex = schema.WithDropIndex
// WithFixture sets the foreign-key renaming option to the migration when upgrading
// ent from v0.1.0 (issue-#285). Defaults to false.
WithFixture = schema.WithFixture
// WithForeignKeys enables creating foreign-key in schema DDL. This defaults to true.
WithForeignKeys = schema.WithForeignKeys
)
// Schema is the API for creating, migrating and dropping a schema.
type Schema struct {
drv dialect.Driver
}
// NewSchema creates a new schema client.
func NewSchema(drv dialect.Driver) *Schema { return &Schema{drv: drv} }
// Create creates all schema resources.
func (s *Schema) Create(ctx context.Context, opts ...schema.MigrateOption) error {
migrate, err := schema.NewMigrate(s.drv, opts...)
if err != nil {
return fmt.Errorf("ent/migrate: %w", err)
}
return migrate.Create(ctx, Tables...)
}
// WriteTo writes the schema changes to w instead of running them against the database.
//
// if err := client.Schema.WriteTo(context.Background(), os.Stdout); err != nil {
// log.Fatal(err)
// }
//
func (s *Schema) WriteTo(ctx context.Context, w io.Writer, opts ...schema.MigrateOption) error {
drv := &schema.WriteDriver{
Writer: w,
Driver: s.drv,
}
migrate, err := schema.NewMigrate(drv, opts...)
if err != nil {
return fmt.Errorf("ent/migrate: %w", err)
}
return migrate.Create(ctx, Tables...)
}
// Code generated by entc, DO NOT EDIT.
package migrate
import (
"entgo.io/ent/dialect/sql/schema"
"entgo.io/ent/schema/field"
)
var (
// ObClustersColumns holds the columns for the "ob_clusters" table.
ObClustersColumns = []*schema.Column{
{Name: "id", Type: field.TypeInt, Increment: true},
{Name: "create_time", Type: field.TypeTime},
{Name: "update_time", Type: field.TypeTime},
{Name: "name", Type: field.TypeString},
{Name: "ob_cluster_id", Type: field.TypeInt64},
{Name: "type", Type: field.TypeString},
{Name: "rootservice_json", Type: field.TypeString, Size: 65536},
}
// ObClustersTable holds the schema information for the "ob_clusters" table.
ObClustersTable = &schema.Table{
Name: "ob_clusters",
Columns: ObClustersColumns,
PrimaryKey: []*schema.Column{ObClustersColumns[0]},
Indexes: []*schema.Index{
{
Name: "obcluster_update_time",
Unique: false,
Columns: []*schema.Column{ObClustersColumns[2]},
},
{
Name: "obcluster_name_ob_cluster_id",
Unique: true,
Columns: []*schema.Column{ObClustersColumns[3], ObClustersColumns[4]},
},
},
}
// Tables holds all the tables in the schema.
Tables = []*schema.Table{
ObClustersTable,
}
)
func init() {
}
// Code generated by entc, DO NOT EDIT.
package ent
import (
"context"
"errors"
"fmt"
"sync"
"time"
"github.com/oceanbase/configserver/ent/obcluster"
"github.com/oceanbase/configserver/ent/predicate"
"entgo.io/ent"
)
const (
// Operation types.
OpCreate = ent.OpCreate
OpDelete = ent.OpDelete
OpDeleteOne = ent.OpDeleteOne
OpUpdate = ent.OpUpdate
OpUpdateOne = ent.OpUpdateOne
// Node types.
TypeObCluster = "ObCluster"
)
// ObClusterMutation represents an operation that mutates the ObCluster nodes in the graph.
type ObClusterMutation struct {
config
op Op
typ string
id *int
create_time *time.Time
update_time *time.Time
name *string
ob_cluster_id *int64
addob_cluster_id *int64
_type *string
rootservice_json *string
clearedFields map[string]struct{}
done bool
oldValue func(context.Context) (*ObCluster, error)
predicates []predicate.ObCluster
}
var _ ent.Mutation = (*ObClusterMutation)(nil)
// obclusterOption allows management of the mutation configuration using functional options.
type obclusterOption func(*ObClusterMutation)
// newObClusterMutation creates new mutation for the ObCluster entity.
func newObClusterMutation(c config, op Op, opts ...obclusterOption) *ObClusterMutation {
m := &ObClusterMutation{
config: c,
op: op,
typ: TypeObCluster,
clearedFields: make(map[string]struct{}),
}
for _, opt := range opts {
opt(m)
}
return m
}
// withObClusterID sets the ID field of the mutation.
func withObClusterID(id int) obclusterOption {
return func(m *ObClusterMutation) {
var (
err error
once sync.Once
value *ObCluster
)
m.oldValue = func(ctx context.Context) (*ObCluster, error) {
once.Do(func() {
if m.done {
err = errors.New("querying old values post mutation is not allowed")
} else {
value, err = m.Client().ObCluster.Get(ctx, id)
}
})
return value, err
}
m.id = &id
}
}
// withObCluster sets the old ObCluster of the mutation.
func withObCluster(node *ObCluster) obclusterOption {
return func(m *ObClusterMutation) {
m.oldValue = func(context.Context) (*ObCluster, error) {
return node, nil
}
m.id = &node.ID
}
}
// Client returns a new `ent.Client` from the mutation. If the mutation was
// executed in a transaction (ent.Tx), a transactional client is returned.
func (m ObClusterMutation) Client() *Client {
client := &Client{config: m.config}
client.init()
return client
}
// Tx returns an `ent.Tx` for mutations that were executed in transactions;
// it returns an error otherwise.
func (m ObClusterMutation) Tx() (*Tx, error) {
if _, ok := m.driver.(*txDriver); !ok {
return nil, errors.New("ent: mutation is not running in a transaction")
}
tx := &Tx{config: m.config}
tx.init()
return tx, nil
}
// ID returns the ID value in the mutation. Note that the ID is only available
// if it was provided to the builder or after it was returned from the database.
func (m *ObClusterMutation) ID() (id int, exists bool) {
if m.id == nil {
return
}
return *m.id, true
}
// IDs queries the database and returns the entity ids that match the mutation's predicate.
// That means, if the mutation is applied within a transaction with an isolation level such
// as sql.LevelSerializable, the returned ids match the ids of the rows that will be updated
// or updated by the mutation.
func (m *ObClusterMutation) IDs(ctx context.Context) ([]int, error) {
switch {
case m.op.Is(OpUpdateOne | OpDeleteOne):
id, exists := m.ID()
if exists {
return []int{id}, nil
}
fallthrough
case m.op.Is(OpUpdate | OpDelete):
return m.Client().ObCluster.Query().Where(m.predicates...).IDs(ctx)
default:
return nil, fmt.Errorf("IDs is not allowed on %s operations", m.op)
}
}
// SetCreateTime sets the "create_time" field.
func (m *ObClusterMutation) SetCreateTime(t time.Time) {
m.create_time = &t
}
// CreateTime returns the value of the "create_time" field in the mutation.
func (m *ObClusterMutation) CreateTime() (r time.Time, exists bool) {
v := m.create_time
if v == nil {
return
}
return *v, true
}
// OldCreateTime returns the old "create_time" field's value of the ObCluster entity.
// If the ObCluster object wasn't provided to the builder, the object is fetched from the database.
// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
func (m *ObClusterMutation) OldCreateTime(ctx context.Context) (v time.Time, err error) {
if !m.op.Is(OpUpdateOne) {
return v, errors.New("OldCreateTime is only allowed on UpdateOne operations")
}
if m.id == nil || m.oldValue == nil {
return v, errors.New("OldCreateTime requires an ID field in the mutation")
}
oldValue, err := m.oldValue(ctx)
if err != nil {
return v, fmt.Errorf("querying old value for OldCreateTime: %w", err)
}
return oldValue.CreateTime, nil
}
// ResetCreateTime resets all changes to the "create_time" field.
func (m *ObClusterMutation) ResetCreateTime() {
m.create_time = nil
}
// SetUpdateTime sets the "update_time" field.
func (m *ObClusterMutation) SetUpdateTime(t time.Time) {
m.update_time = &t
}
// UpdateTime returns the value of the "update_time" field in the mutation.
func (m *ObClusterMutation) UpdateTime() (r time.Time, exists bool) {
v := m.update_time
if v == nil {
return
}
return *v, true
}
// OldUpdateTime returns the old "update_time" field's value of the ObCluster entity.
// If the ObCluster object wasn't provided to the builder, the object is fetched from the database.
// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
func (m *ObClusterMutation) OldUpdateTime(ctx context.Context) (v time.Time, err error) {
if !m.op.Is(OpUpdateOne) {
return v, errors.New("OldUpdateTime is only allowed on UpdateOne operations")
}
if m.id == nil || m.oldValue == nil {
return v, errors.New("OldUpdateTime requires an ID field in the mutation")
}
oldValue, err := m.oldValue(ctx)
if err != nil {
return v, fmt.Errorf("querying old value for OldUpdateTime: %w", err)
}
return oldValue.UpdateTime, nil
}
// ResetUpdateTime resets all changes to the "update_time" field.
func (m *ObClusterMutation) ResetUpdateTime() {
m.update_time = nil
}
// SetName sets the "name" field.
func (m *ObClusterMutation) SetName(s string) {
m.name = &s
}
// Name returns the value of the "name" field in the mutation.
func (m *ObClusterMutation) Name() (r string, exists bool) {
v := m.name
if v == nil {
return
}
return *v, true
}
// OldName returns the old "name" field's value of the ObCluster entity.
// If the ObCluster object wasn't provided to the builder, the object is fetched from the database.
// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
func (m *ObClusterMutation) OldName(ctx context.Context) (v string, err error) {
if !m.op.Is(OpUpdateOne) {
return v, errors.New("OldName is only allowed on UpdateOne operations")
}
if m.id == nil || m.oldValue == nil {
return v, errors.New("OldName requires an ID field in the mutation")
}
oldValue, err := m.oldValue(ctx)
if err != nil {
return v, fmt.Errorf("querying old value for OldName: %w", err)
}
return oldValue.Name, nil
}
// ResetName resets all changes to the "name" field.
func (m *ObClusterMutation) ResetName() {
m.name = nil
}
// SetObClusterID sets the "ob_cluster_id" field.
func (m *ObClusterMutation) SetObClusterID(i int64) {
m.ob_cluster_id = &i
m.addob_cluster_id = nil
}
// ObClusterID returns the value of the "ob_cluster_id" field in the mutation.
func (m *ObClusterMutation) ObClusterID() (r int64, exists bool) {
v := m.ob_cluster_id
if v == nil {
return
}
return *v, true
}
// OldObClusterID returns the old "ob_cluster_id" field's value of the ObCluster entity.
// If the ObCluster object wasn't provided to the builder, the object is fetched from the database.
// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
func (m *ObClusterMutation) OldObClusterID(ctx context.Context) (v int64, err error) {
if !m.op.Is(OpUpdateOne) {
return v, errors.New("OldObClusterID is only allowed on UpdateOne operations")
}
if m.id == nil || m.oldValue == nil {
return v, errors.New("OldObClusterID requires an ID field in the mutation")
}
oldValue, err := m.oldValue(ctx)
if err != nil {
return v, fmt.Errorf("querying old value for OldObClusterID: %w", err)
}
return oldValue.ObClusterID, nil
}
// AddObClusterID adds i to the "ob_cluster_id" field.
func (m *ObClusterMutation) AddObClusterID(i int64) {
if m.addob_cluster_id != nil {
*m.addob_cluster_id += i
} else {
m.addob_cluster_id = &i
}
}
// AddedObClusterID returns the value that was added to the "ob_cluster_id" field in this mutation.
func (m *ObClusterMutation) AddedObClusterID() (r int64, exists bool) {
v := m.addob_cluster_id
if v == nil {
return
}
return *v, true
}
// ResetObClusterID resets all changes to the "ob_cluster_id" field.
func (m *ObClusterMutation) ResetObClusterID() {
m.ob_cluster_id = nil
m.addob_cluster_id = nil
}
// SetType sets the "type" field.
func (m *ObClusterMutation) SetType(s string) {
m._type = &s
}
// GetType returns the value of the "type" field in the mutation.
func (m *ObClusterMutation) GetType() (r string, exists bool) {
v := m._type
if v == nil {
return
}
return *v, true
}
// OldType returns the old "type" field's value of the ObCluster entity.
// If the ObCluster object wasn't provided to the builder, the object is fetched from the database.
// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
func (m *ObClusterMutation) OldType(ctx context.Context) (v string, err error) {
if !m.op.Is(OpUpdateOne) {
return v, errors.New("OldType is only allowed on UpdateOne operations")
}
if m.id == nil || m.oldValue == nil {
return v, errors.New("OldType requires an ID field in the mutation")
}
oldValue, err := m.oldValue(ctx)
if err != nil {
return v, fmt.Errorf("querying old value for OldType: %w", err)
}
return oldValue.Type, nil
}
// ResetType resets all changes to the "type" field.
func (m *ObClusterMutation) ResetType() {
m._type = nil
}
// SetRootserviceJSON sets the "rootservice_json" field.
func (m *ObClusterMutation) SetRootserviceJSON(s string) {
m.rootservice_json = &s
}
// RootserviceJSON returns the value of the "rootservice_json" field in the mutation.
func (m *ObClusterMutation) RootserviceJSON() (r string, exists bool) {
v := m.rootservice_json
if v == nil {
return
}
return *v, true
}
// OldRootserviceJSON returns the old "rootservice_json" field's value of the ObCluster entity.
// If the ObCluster object wasn't provided to the builder, the object is fetched from the database.
// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
func (m *ObClusterMutation) OldRootserviceJSON(ctx context.Context) (v string, err error) {
if !m.op.Is(OpUpdateOne) {
return v, errors.New("OldRootserviceJSON is only allowed on UpdateOne operations")
}
if m.id == nil || m.oldValue == nil {
return v, errors.New("OldRootserviceJSON requires an ID field in the mutation")
}
oldValue, err := m.oldValue(ctx)
if err != nil {
return v, fmt.Errorf("querying old value for OldRootserviceJSON: %w", err)
}
return oldValue.RootserviceJSON, nil
}
// ResetRootserviceJSON resets all changes to the "rootservice_json" field.
func (m *ObClusterMutation) ResetRootserviceJSON() {
m.rootservice_json = nil
}
// Where appends a list predicates to the ObClusterMutation builder.
func (m *ObClusterMutation) Where(ps ...predicate.ObCluster) {
m.predicates = append(m.predicates, ps...)
}
// Op returns the operation name.
func (m *ObClusterMutation) Op() Op {
return m.op
}
// Type returns the node type of this mutation (ObCluster).
func (m *ObClusterMutation) Type() string {
return m.typ
}
// Fields returns all fields that were changed during this mutation. Note that in
// order to get all numeric fields that were incremented/decremented, call
// AddedFields().
func (m *ObClusterMutation) Fields() []string {
fields := make([]string, 0, 6)
if m.create_time != nil {
fields = append(fields, obcluster.FieldCreateTime)
}
if m.update_time != nil {
fields = append(fields, obcluster.FieldUpdateTime)
}
if m.name != nil {
fields = append(fields, obcluster.FieldName)
}
if m.ob_cluster_id != nil {
fields = append(fields, obcluster.FieldObClusterID)
}
if m._type != nil {
fields = append(fields, obcluster.FieldType)
}
if m.rootservice_json != nil {
fields = append(fields, obcluster.FieldRootserviceJSON)
}
return fields
}
// Field returns the value of a field with the given name. The second boolean
// return value indicates that this field was not set, or was not defined in the
// schema.
func (m *ObClusterMutation) Field(name string) (ent.Value, bool) {
switch name {
case obcluster.FieldCreateTime:
return m.CreateTime()
case obcluster.FieldUpdateTime:
return m.UpdateTime()
case obcluster.FieldName:
return m.Name()
case obcluster.FieldObClusterID:
return m.ObClusterID()
case obcluster.FieldType:
return m.GetType()
case obcluster.FieldRootserviceJSON:
return m.RootserviceJSON()
}
return nil, false
}
// OldField returns the old value of the field from the database. An error is
// returned if the mutation operation is not UpdateOne, or the query to the
// database failed.
func (m *ObClusterMutation) OldField(ctx context.Context, name string) (ent.Value, error) {
switch name {
case obcluster.FieldCreateTime:
return m.OldCreateTime(ctx)
case obcluster.FieldUpdateTime:
return m.OldUpdateTime(ctx)
case obcluster.FieldName:
return m.OldName(ctx)
case obcluster.FieldObClusterID:
return m.OldObClusterID(ctx)
case obcluster.FieldType:
return m.OldType(ctx)
case obcluster.FieldRootserviceJSON:
return m.OldRootserviceJSON(ctx)
}
return nil, fmt.Errorf("unknown ObCluster field %s", name)
}
// SetField sets the value of a field with the given name. It returns an error if
// the field is not defined in the schema, or if the type mismatched the field
// type.
func (m *ObClusterMutation) SetField(name string, value ent.Value) error {
switch name {
case obcluster.FieldCreateTime:
v, ok := value.(time.Time)
if !ok {
return fmt.Errorf("unexpected type %T for field %s", value, name)
}
m.SetCreateTime(v)
return nil
case obcluster.FieldUpdateTime:
v, ok := value.(time.Time)
if !ok {
return fmt.Errorf("unexpected type %T for field %s", value, name)
}
m.SetUpdateTime(v)
return nil
case obcluster.FieldName:
v, ok := value.(string)
if !ok {
return fmt.Errorf("unexpected type %T for field %s", value, name)
}
m.SetName(v)
return nil
case obcluster.FieldObClusterID:
v, ok := value.(int64)
if !ok {
return fmt.Errorf("unexpected type %T for field %s", value, name)
}
m.SetObClusterID(v)
return nil
case obcluster.FieldType:
v, ok := value.(string)
if !ok {
return fmt.Errorf("unexpected type %T for field %s", value, name)
}
m.SetType(v)
return nil
case obcluster.FieldRootserviceJSON:
v, ok := value.(string)
if !ok {
return fmt.Errorf("unexpected type %T for field %s", value, name)
}
m.SetRootserviceJSON(v)
return nil
}
return fmt.Errorf("unknown ObCluster field %s", name)
}
// AddedFields returns all numeric fields that were incremented/decremented during
// this mutation.
func (m *ObClusterMutation) AddedFields() []string {
var fields []string
if m.addob_cluster_id != nil {
fields = append(fields, obcluster.FieldObClusterID)
}
return fields
}
// AddedField returns the numeric value that was incremented/decremented on a field
// with the given name. The second boolean return value indicates that this field
// was not set, or was not defined in the schema.
func (m *ObClusterMutation) AddedField(name string) (ent.Value, bool) {
switch name {
case obcluster.FieldObClusterID:
return m.AddedObClusterID()
}
return nil, false
}
// AddField adds the value to the field with the given name. It returns an error if
// the field is not defined in the schema, or if the type mismatched the field
// type.
func (m *ObClusterMutation) AddField(name string, value ent.Value) error {
switch name {
case obcluster.FieldObClusterID:
v, ok := value.(int64)
if !ok {
return fmt.Errorf("unexpected type %T for field %s", value, name)
}
m.AddObClusterID(v)
return nil
}
return fmt.Errorf("unknown ObCluster numeric field %s", name)
}
// ClearedFields returns all nullable fields that were cleared during this
// mutation.
func (m *ObClusterMutation) ClearedFields() []string {
return nil
}
// FieldCleared returns a boolean indicating if a field with the given name was
// cleared in this mutation.
func (m *ObClusterMutation) FieldCleared(name string) bool {
_, ok := m.clearedFields[name]
return ok
}
// ClearField clears the value of the field with the given name. It returns an
// error if the field is not defined in the schema.
func (m *ObClusterMutation) ClearField(name string) error {
return fmt.Errorf("unknown ObCluster nullable field %s", name)
}
// ResetField resets all changes in the mutation for the field with the given name.
// It returns an error if the field is not defined in the schema.
func (m *ObClusterMutation) ResetField(name string) error {
switch name {
case obcluster.FieldCreateTime:
m.ResetCreateTime()
return nil
case obcluster.FieldUpdateTime:
m.ResetUpdateTime()
return nil
case obcluster.FieldName:
m.ResetName()
return nil
case obcluster.FieldObClusterID:
m.ResetObClusterID()
return nil
case obcluster.FieldType:
m.ResetType()
return nil
case obcluster.FieldRootserviceJSON:
m.ResetRootserviceJSON()
return nil
}
return fmt.Errorf("unknown ObCluster field %s", name)
}
// AddedEdges returns all edge names that were set/added in this mutation.
func (m *ObClusterMutation) AddedEdges() []string {
edges := make([]string, 0, 0)
return edges
}
// AddedIDs returns all IDs (to other nodes) that were added for the given edge
// name in this mutation.
func (m *ObClusterMutation) AddedIDs(name string) []ent.Value {
return nil
}
// RemovedEdges returns all edge names that were removed in this mutation.
func (m *ObClusterMutation) RemovedEdges() []string {
edges := make([]string, 0, 0)
return edges
}
// RemovedIDs returns all IDs (to other nodes) that were removed for the edge with
// the given name in this mutation.
func (m *ObClusterMutation) RemovedIDs(name string) []ent.Value {
return nil
}
// ClearedEdges returns all edge names that were cleared in this mutation.
func (m *ObClusterMutation) ClearedEdges() []string {
edges := make([]string, 0, 0)
return edges
}
// EdgeCleared returns a boolean which indicates if the edge with the given name
// was cleared in this mutation.
func (m *ObClusterMutation) EdgeCleared(name string) bool {
return false
}
// ClearEdge clears the value of the edge with the given name. It returns an error
// if that edge is not defined in the schema.
func (m *ObClusterMutation) ClearEdge(name string) error {
return fmt.Errorf("unknown ObCluster unique edge %s", name)
}
// ResetEdge resets all changes to the edge with the given name in this mutation.
// It returns an error if the edge is not defined in the schema.
func (m *ObClusterMutation) ResetEdge(name string) error {
return fmt.Errorf("unknown ObCluster edge %s", name)
}
// Code generated by entc, DO NOT EDIT.
package ent
import (
"fmt"
"strings"
"time"
"entgo.io/ent/dialect/sql"
"github.com/oceanbase/configserver/ent/obcluster"
)
// ObCluster is the model entity for the ObCluster schema.
type ObCluster struct {
config `json:"-"`
// ID of the ent.
ID int `json:"id,omitempty"`
// CreateTime holds the value of the "create_time" field.
CreateTime time.Time `json:"create_time,omitempty"`
// UpdateTime holds the value of the "update_time" field.
UpdateTime time.Time `json:"update_time,omitempty"`
// Name holds the value of the "name" field.
Name string `json:"name,omitempty"`
// ObClusterID holds the value of the "ob_cluster_id" field.
ObClusterID int64 `json:"ob_cluster_id,omitempty"`
// Type holds the value of the "type" field.
Type string `json:"type,omitempty"`
// RootserviceJSON holds the value of the "rootservice_json" field.
RootserviceJSON string `json:"rootservice_json,omitempty"`
}
// scanValues returns the types for scanning values from sql.Rows.
func (*ObCluster) scanValues(columns []string) ([]interface{}, error) {
values := make([]interface{}, len(columns))
for i := range columns {
switch columns[i] {
case obcluster.FieldID, obcluster.FieldObClusterID:
values[i] = new(sql.NullInt64)
case obcluster.FieldName, obcluster.FieldType, obcluster.FieldRootserviceJSON:
values[i] = new(sql.NullString)
case obcluster.FieldCreateTime, obcluster.FieldUpdateTime:
values[i] = new(sql.NullTime)
default:
return nil, fmt.Errorf("unexpected column %q for type ObCluster", columns[i])
}
}
return values, nil
}
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the ObCluster fields.
func (oc *ObCluster) assignValues(columns []string, values []interface{}) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
}
for i := range columns {
switch columns[i] {
case obcluster.FieldID:
value, ok := values[i].(*sql.NullInt64)
if !ok {
return fmt.Errorf("unexpected type %T for field id", value)
}
oc.ID = int(value.Int64)
case obcluster.FieldCreateTime:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field create_time", values[i])
} else if value.Valid {
oc.CreateTime = value.Time
}
case obcluster.FieldUpdateTime:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field update_time", values[i])
} else if value.Valid {
oc.UpdateTime = value.Time
}
case obcluster.FieldName:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field name", values[i])
} else if value.Valid {
oc.Name = value.String
}
case obcluster.FieldObClusterID:
if value, ok := values[i].(*sql.NullInt64); !ok {
return fmt.Errorf("unexpected type %T for field ob_cluster_id", values[i])
} else if value.Valid {
oc.ObClusterID = value.Int64
}
case obcluster.FieldType:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field type", values[i])
} else if value.Valid {
oc.Type = value.String
}
case obcluster.FieldRootserviceJSON:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field rootservice_json", values[i])
} else if value.Valid {
oc.RootserviceJSON = value.String
}
}
}
return nil
}
// Update returns a builder for updating this ObCluster.
// Note that you need to call ObCluster.Unwrap() before calling this method if this ObCluster
// was returned from a transaction, and the transaction was committed or rolled back.
func (oc *ObCluster) Update() *ObClusterUpdateOne {
return (&ObClusterClient{config: oc.config}).UpdateOne(oc)
}
// Unwrap unwraps the ObCluster entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (oc *ObCluster) Unwrap() *ObCluster {
tx, ok := oc.config.driver.(*txDriver)
if !ok {
panic("ent: ObCluster is not a transactional entity")
}
oc.config.driver = tx.drv
return oc
}
// String implements the fmt.Stringer.
func (oc *ObCluster) String() string {
var builder strings.Builder
builder.WriteString("ObCluster(")
builder.WriteString(fmt.Sprintf("id=%v", oc.ID))
builder.WriteString(", create_time=")
builder.WriteString(oc.CreateTime.Format(time.ANSIC))
builder.WriteString(", update_time=")
builder.WriteString(oc.UpdateTime.Format(time.ANSIC))
builder.WriteString(", name=")
builder.WriteString(oc.Name)
builder.WriteString(", ob_cluster_id=")
builder.WriteString(fmt.Sprintf("%v", oc.ObClusterID))
builder.WriteString(", type=")
builder.WriteString(oc.Type)
builder.WriteString(", rootservice_json=")
builder.WriteString(oc.RootserviceJSON)
builder.WriteByte(')')
return builder.String()
}
// ObClusters is a parsable slice of ObCluster.
type ObClusters []*ObCluster
func (oc ObClusters) config(cfg config) {
for _i := range oc {
oc[_i].config = cfg
}
}
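
The Update and Unwrap helpers above are the usual entry points for modifying a row that was loaded earlier. A minimal sketch, assuming `oc` is an *ObCluster already fetched through the generated query builder (whose diff is collapsed in this change); the helper name below is hypothetical, not project code:

```go
// Sketch only: this would live alongside the generated code (package ent) and
// needs "context" imported. oc must be an *ObCluster previously loaded with
// the generated query builder.
func bumpRootservice(ctx context.Context, oc *ObCluster, newJSON string) (*ObCluster, error) {
	// If oc was returned from a transaction that has already been committed or
	// rolled back, call oc = oc.Unwrap() first, as the comment on Update() notes.
	return oc.Update().
		SetRootserviceJSON(newJSON).
		Save(ctx)
}
```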
// Code generated by entc, DO NOT EDIT.
package obcluster
import (
"time"
)
const (
// Label holds the string label denoting the obcluster type in the database.
Label = "ob_cluster"
// FieldID holds the string denoting the id field in the database.
FieldID = "id"
// FieldCreateTime holds the string denoting the create_time field in the database.
FieldCreateTime = "create_time"
// FieldUpdateTime holds the string denoting the update_time field in the database.
FieldUpdateTime = "update_time"
// FieldName holds the string denoting the name field in the database.
FieldName = "name"
// FieldObClusterID holds the string denoting the ob_cluster_id field in the database.
FieldObClusterID = "ob_cluster_id"
// FieldType holds the string denoting the type field in the database.
FieldType = "type"
// FieldRootserviceJSON holds the string denoting the rootservice_json field in the database.
FieldRootserviceJSON = "rootservice_json"
// Table holds the table name of the obcluster in the database.
Table = "ob_clusters"
)
// Columns holds all SQL columns for obcluster fields.
var Columns = []string{
FieldID,
FieldCreateTime,
FieldUpdateTime,
FieldName,
FieldObClusterID,
FieldType,
FieldRootserviceJSON,
}
// ValidColumn reports if the column name is valid (part of the table columns).
func ValidColumn(column string) bool {
for i := range Columns {
if column == Columns[i] {
return true
}
}
return false
}
var (
// DefaultCreateTime holds the default value on creation for the "create_time" field.
DefaultCreateTime func() time.Time
// DefaultUpdateTime holds the default value on creation for the "update_time" field.
DefaultUpdateTime func() time.Time
// UpdateDefaultUpdateTime holds the default value on update for the "update_time" field.
UpdateDefaultUpdateTime func() time.Time
// ObClusterIDValidator is a validator for the "ob_cluster_id" field. It is called by the builders before save.
ObClusterIDValidator func(int64) error
)
(2 collapsed file diffs not shown.)
// Code generated by entc, DO NOT EDIT.
package ent
import (
"context"
"fmt"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/oceanbase/configserver/ent/obcluster"
"github.com/oceanbase/configserver/ent/predicate"
)
// ObClusterDelete is the builder for deleting a ObCluster entity.
type ObClusterDelete struct {
config
hooks []Hook
mutation *ObClusterMutation
}
// Where appends a list of predicates to the ObClusterDelete builder.
func (ocd *ObClusterDelete) Where(ps ...predicate.ObCluster) *ObClusterDelete {
ocd.mutation.Where(ps...)
return ocd
}
// Exec executes the deletion query and returns how many vertices were deleted.
func (ocd *ObClusterDelete) Exec(ctx context.Context) (int, error) {
var (
err error
affected int
)
if len(ocd.hooks) == 0 {
affected, err = ocd.sqlExec(ctx)
} else {
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*ObClusterMutation)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T", m)
}
ocd.mutation = mutation
affected, err = ocd.sqlExec(ctx)
mutation.done = true
return affected, err
})
for i := len(ocd.hooks) - 1; i >= 0; i-- {
if ocd.hooks[i] == nil {
return 0, fmt.Errorf("ent: uninitialized hook (forgotten import ent/runtime?)")
}
mut = ocd.hooks[i](mut)
}
if _, err := mut.Mutate(ctx, ocd.mutation); err != nil {
return 0, err
}
}
return affected, err
}
// ExecX is like Exec, but panics if an error occurs.
func (ocd *ObClusterDelete) ExecX(ctx context.Context) int {
n, err := ocd.Exec(ctx)
if err != nil {
panic(err)
}
return n
}
func (ocd *ObClusterDelete) sqlExec(ctx context.Context) (int, error) {
_spec := &sqlgraph.DeleteSpec{
Node: &sqlgraph.NodeSpec{
Table: obcluster.Table,
ID: &sqlgraph.FieldSpec{
Type: field.TypeInt,
Column: obcluster.FieldID,
},
},
}
if ps := ocd.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
return sqlgraph.DeleteNodes(ctx, ocd.driver, _spec)
}
// ObClusterDeleteOne is the builder for deleting a single ObCluster entity.
type ObClusterDeleteOne struct {
ocd *ObClusterDelete
}
// Exec executes the deletion query.
func (ocdo *ObClusterDeleteOne) Exec(ctx context.Context) error {
n, err := ocdo.ocd.Exec(ctx)
switch {
case err != nil:
return err
case n == 0:
return &NotFoundError{obcluster.Label}
default:
return nil
}
}
// ExecX is like Exec, but panics if an error occurs.
func (ocdo *ObClusterDeleteOne) ExecX(ctx context.Context) {
ocdo.ocd.ExecX(ctx)
}
(This file's diff is collapsed and not shown.)
// Code generated by entc, DO NOT EDIT.
package ent
import (
"context"
"errors"
"fmt"
"time"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/oceanbase/configserver/ent/obcluster"
"github.com/oceanbase/configserver/ent/predicate"
)
// ObClusterUpdate is the builder for updating ObCluster entities.
type ObClusterUpdate struct {
config
hooks []Hook
mutation *ObClusterMutation
}
// Where appends a list of predicates to the ObClusterUpdate builder.
func (ocu *ObClusterUpdate) Where(ps ...predicate.ObCluster) *ObClusterUpdate {
ocu.mutation.Where(ps...)
return ocu
}
// SetCreateTime sets the "create_time" field.
func (ocu *ObClusterUpdate) SetCreateTime(t time.Time) *ObClusterUpdate {
ocu.mutation.SetCreateTime(t)
return ocu
}
// SetNillableCreateTime sets the "create_time" field if the given value is not nil.
func (ocu *ObClusterUpdate) SetNillableCreateTime(t *time.Time) *ObClusterUpdate {
if t != nil {
ocu.SetCreateTime(*t)
}
return ocu
}
// SetUpdateTime sets the "update_time" field.
func (ocu *ObClusterUpdate) SetUpdateTime(t time.Time) *ObClusterUpdate {
ocu.mutation.SetUpdateTime(t)
return ocu
}
// SetName sets the "name" field.
func (ocu *ObClusterUpdate) SetName(s string) *ObClusterUpdate {
ocu.mutation.SetName(s)
return ocu
}
// SetObClusterID sets the "ob_cluster_id" field.
func (ocu *ObClusterUpdate) SetObClusterID(i int64) *ObClusterUpdate {
ocu.mutation.ResetObClusterID()
ocu.mutation.SetObClusterID(i)
return ocu
}
// AddObClusterID adds i to the "ob_cluster_id" field.
func (ocu *ObClusterUpdate) AddObClusterID(i int64) *ObClusterUpdate {
ocu.mutation.AddObClusterID(i)
return ocu
}
// SetType sets the "type" field.
func (ocu *ObClusterUpdate) SetType(s string) *ObClusterUpdate {
ocu.mutation.SetType(s)
return ocu
}
// SetRootserviceJSON sets the "rootservice_json" field.
func (ocu *ObClusterUpdate) SetRootserviceJSON(s string) *ObClusterUpdate {
ocu.mutation.SetRootserviceJSON(s)
return ocu
}
// Mutation returns the ObClusterMutation object of the builder.
func (ocu *ObClusterUpdate) Mutation() *ObClusterMutation {
return ocu.mutation
}
// Save executes the query and returns the number of nodes affected by the update operation.
func (ocu *ObClusterUpdate) Save(ctx context.Context) (int, error) {
var (
err error
affected int
)
ocu.defaults()
if len(ocu.hooks) == 0 {
if err = ocu.check(); err != nil {
return 0, err
}
affected, err = ocu.sqlSave(ctx)
} else {
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*ObClusterMutation)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T", m)
}
if err = ocu.check(); err != nil {
return 0, err
}
ocu.mutation = mutation
affected, err = ocu.sqlSave(ctx)
mutation.done = true
return affected, err
})
for i := len(ocu.hooks) - 1; i >= 0; i-- {
if ocu.hooks[i] == nil {
return 0, fmt.Errorf("ent: uninitialized hook (forgotten import ent/runtime?)")
}
mut = ocu.hooks[i](mut)
}
if _, err := mut.Mutate(ctx, ocu.mutation); err != nil {
return 0, err
}
}
return affected, err
}
// SaveX is like Save, but panics if an error occurs.
func (ocu *ObClusterUpdate) SaveX(ctx context.Context) int {
affected, err := ocu.Save(ctx)
if err != nil {
panic(err)
}
return affected
}
// Exec executes the query.
func (ocu *ObClusterUpdate) Exec(ctx context.Context) error {
_, err := ocu.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (ocu *ObClusterUpdate) ExecX(ctx context.Context) {
if err := ocu.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (ocu *ObClusterUpdate) defaults() {
if _, ok := ocu.mutation.UpdateTime(); !ok {
v := obcluster.UpdateDefaultUpdateTime()
ocu.mutation.SetUpdateTime(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (ocu *ObClusterUpdate) check() error {
if v, ok := ocu.mutation.ObClusterID(); ok {
if err := obcluster.ObClusterIDValidator(v); err != nil {
return &ValidationError{Name: "ob_cluster_id", err: fmt.Errorf(`ent: validator failed for field "ObCluster.ob_cluster_id": %w`, err)}
}
}
return nil
}
func (ocu *ObClusterUpdate) sqlSave(ctx context.Context) (n int, err error) {
_spec := &sqlgraph.UpdateSpec{
Node: &sqlgraph.NodeSpec{
Table: obcluster.Table,
Columns: obcluster.Columns,
ID: &sqlgraph.FieldSpec{
Type: field.TypeInt,
Column: obcluster.FieldID,
},
},
}
if ps := ocu.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := ocu.mutation.CreateTime(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeTime,
Value: value,
Column: obcluster.FieldCreateTime,
})
}
if value, ok := ocu.mutation.UpdateTime(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeTime,
Value: value,
Column: obcluster.FieldUpdateTime,
})
}
if value, ok := ocu.mutation.Name(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: obcluster.FieldName,
})
}
if value, ok := ocu.mutation.ObClusterID(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeInt64,
Value: value,
Column: obcluster.FieldObClusterID,
})
}
if value, ok := ocu.mutation.AddedObClusterID(); ok {
_spec.Fields.Add = append(_spec.Fields.Add, &sqlgraph.FieldSpec{
Type: field.TypeInt64,
Value: value,
Column: obcluster.FieldObClusterID,
})
}
if value, ok := ocu.mutation.GetType(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: obcluster.FieldType,
})
}
if value, ok := ocu.mutation.RootserviceJSON(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: obcluster.FieldRootserviceJSON,
})
}
if n, err = sqlgraph.UpdateNodes(ctx, ocu.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{obcluster.Label}
} else if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{err.Error(), err}
}
return 0, err
}
return n, nil
}
// ObClusterUpdateOne is the builder for updating a single ObCluster entity.
type ObClusterUpdateOne struct {
config
fields []string
hooks []Hook
mutation *ObClusterMutation
}
// SetCreateTime sets the "create_time" field.
func (ocuo *ObClusterUpdateOne) SetCreateTime(t time.Time) *ObClusterUpdateOne {
ocuo.mutation.SetCreateTime(t)
return ocuo
}
// SetNillableCreateTime sets the "create_time" field if the given value is not nil.
func (ocuo *ObClusterUpdateOne) SetNillableCreateTime(t *time.Time) *ObClusterUpdateOne {
if t != nil {
ocuo.SetCreateTime(*t)
}
return ocuo
}
// SetUpdateTime sets the "update_time" field.
func (ocuo *ObClusterUpdateOne) SetUpdateTime(t time.Time) *ObClusterUpdateOne {
ocuo.mutation.SetUpdateTime(t)
return ocuo
}
// SetName sets the "name" field.
func (ocuo *ObClusterUpdateOne) SetName(s string) *ObClusterUpdateOne {
ocuo.mutation.SetName(s)
return ocuo
}
// SetObClusterID sets the "ob_cluster_id" field.
func (ocuo *ObClusterUpdateOne) SetObClusterID(i int64) *ObClusterUpdateOne {
ocuo.mutation.ResetObClusterID()
ocuo.mutation.SetObClusterID(i)
return ocuo
}
// AddObClusterID adds i to the "ob_cluster_id" field.
func (ocuo *ObClusterUpdateOne) AddObClusterID(i int64) *ObClusterUpdateOne {
ocuo.mutation.AddObClusterID(i)
return ocuo
}
// SetType sets the "type" field.
func (ocuo *ObClusterUpdateOne) SetType(s string) *ObClusterUpdateOne {
ocuo.mutation.SetType(s)
return ocuo
}
// SetRootserviceJSON sets the "rootservice_json" field.
func (ocuo *ObClusterUpdateOne) SetRootserviceJSON(s string) *ObClusterUpdateOne {
ocuo.mutation.SetRootserviceJSON(s)
return ocuo
}
// Mutation returns the ObClusterMutation object of the builder.
func (ocuo *ObClusterUpdateOne) Mutation() *ObClusterMutation {
return ocuo.mutation
}
// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (ocuo *ObClusterUpdateOne) Select(field string, fields ...string) *ObClusterUpdateOne {
ocuo.fields = append([]string{field}, fields...)
return ocuo
}
// Save executes the query and returns the updated ObCluster entity.
func (ocuo *ObClusterUpdateOne) Save(ctx context.Context) (*ObCluster, error) {
var (
err error
node *ObCluster
)
ocuo.defaults()
if len(ocuo.hooks) == 0 {
if err = ocuo.check(); err != nil {
return nil, err
}
node, err = ocuo.sqlSave(ctx)
} else {
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*ObClusterMutation)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T", m)
}
if err = ocuo.check(); err != nil {
return nil, err
}
ocuo.mutation = mutation
node, err = ocuo.sqlSave(ctx)
mutation.done = true
return node, err
})
for i := len(ocuo.hooks) - 1; i >= 0; i-- {
if ocuo.hooks[i] == nil {
return nil, fmt.Errorf("ent: uninitialized hook (forgotten import ent/runtime?)")
}
mut = ocuo.hooks[i](mut)
}
if _, err := mut.Mutate(ctx, ocuo.mutation); err != nil {
return nil, err
}
}
return node, err
}
// SaveX is like Save, but panics if an error occurs.
func (ocuo *ObClusterUpdateOne) SaveX(ctx context.Context) *ObCluster {
node, err := ocuo.Save(ctx)
if err != nil {
panic(err)
}
return node
}
// Exec executes the query on the entity.
func (ocuo *ObClusterUpdateOne) Exec(ctx context.Context) error {
_, err := ocuo.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (ocuo *ObClusterUpdateOne) ExecX(ctx context.Context) {
if err := ocuo.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (ocuo *ObClusterUpdateOne) defaults() {
if _, ok := ocuo.mutation.UpdateTime(); !ok {
v := obcluster.UpdateDefaultUpdateTime()
ocuo.mutation.SetUpdateTime(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (ocuo *ObClusterUpdateOne) check() error {
if v, ok := ocuo.mutation.ObClusterID(); ok {
if err := obcluster.ObClusterIDValidator(v); err != nil {
return &ValidationError{Name: "ob_cluster_id", err: fmt.Errorf(`ent: validator failed for field "ObCluster.ob_cluster_id": %w`, err)}
}
}
return nil
}
func (ocuo *ObClusterUpdateOne) sqlSave(ctx context.Context) (_node *ObCluster, err error) {
_spec := &sqlgraph.UpdateSpec{
Node: &sqlgraph.NodeSpec{
Table: obcluster.Table,
Columns: obcluster.Columns,
ID: &sqlgraph.FieldSpec{
Type: field.TypeInt,
Column: obcluster.FieldID,
},
},
}
id, ok := ocuo.mutation.ID()
if !ok {
return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "ObCluster.id" for update`)}
}
_spec.Node.ID.Value = id
if fields := ocuo.fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, obcluster.FieldID)
for _, f := range fields {
if !obcluster.ValidColumn(f) {
return nil, &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
}
if f != obcluster.FieldID {
_spec.Node.Columns = append(_spec.Node.Columns, f)
}
}
}
if ps := ocuo.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := ocuo.mutation.CreateTime(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeTime,
Value: value,
Column: obcluster.FieldCreateTime,
})
}
if value, ok := ocuo.mutation.UpdateTime(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeTime,
Value: value,
Column: obcluster.FieldUpdateTime,
})
}
if value, ok := ocuo.mutation.Name(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: obcluster.FieldName,
})
}
if value, ok := ocuo.mutation.ObClusterID(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeInt64,
Value: value,
Column: obcluster.FieldObClusterID,
})
}
if value, ok := ocuo.mutation.AddedObClusterID(); ok {
_spec.Fields.Add = append(_spec.Fields.Add, &sqlgraph.FieldSpec{
Type: field.TypeInt64,
Value: value,
Column: obcluster.FieldObClusterID,
})
}
if value, ok := ocuo.mutation.GetType(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: obcluster.FieldType,
})
}
if value, ok := ocuo.mutation.RootserviceJSON(); ok {
_spec.Fields.Set = append(_spec.Fields.Set, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: obcluster.FieldRootserviceJSON,
})
}
_node = &ObCluster{config: ocuo.config}
_spec.Assign = _node.assignValues
_spec.ScanValues = _node.scanValues
if err = sqlgraph.UpdateNode(ctx, ocuo.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{obcluster.Label}
} else if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{err.Error(), err}
}
return nil, err
}
return _node, nil
}
// Code generated by entc, DO NOT EDIT.
package predicate
import (
"entgo.io/ent/dialect/sql"
)
// ObCluster is the predicate function for obcluster builders.
type ObCluster func(*sql.Selector)
// Code generated by entc, DO NOT EDIT.
package ent
import (
"time"
"github.com/oceanbase/configserver/ent/obcluster"
"github.com/oceanbase/configserver/ent/schema"
)
// The init function reads all schema descriptors with runtime code
// (default values, validators, hooks and policies) and stitches it
// to their package variables.
func init() {
obclusterFields := schema.ObCluster{}.Fields()
_ = obclusterFields
// obclusterDescCreateTime is the schema descriptor for create_time field.
obclusterDescCreateTime := obclusterFields[0].Descriptor()
// obcluster.DefaultCreateTime holds the default value on creation for the create_time field.
obcluster.DefaultCreateTime = obclusterDescCreateTime.Default.(func() time.Time)
// obclusterDescUpdateTime is the schema descriptor for update_time field.
obclusterDescUpdateTime := obclusterFields[1].Descriptor()
// obcluster.DefaultUpdateTime holds the default value on creation for the update_time field.
obcluster.DefaultUpdateTime = obclusterDescUpdateTime.Default.(func() time.Time)
// obcluster.UpdateDefaultUpdateTime holds the default value on update for the update_time field.
obcluster.UpdateDefaultUpdateTime = obclusterDescUpdateTime.UpdateDefault.(func() time.Time)
// obclusterDescObClusterID is the schema descriptor for ob_cluster_id field.
obclusterDescObClusterID := obclusterFields[3].Descriptor()
// obcluster.ObClusterIDValidator is a validator for the "ob_cluster_id" field. It is called by the builders before save.
obcluster.ObClusterIDValidator = obclusterDescObClusterID.Validators[0].(func(int64) error)
}
// Code generated by entc, DO NOT EDIT.
package runtime
// The schema-stitching logic is generated in github.com/oceanbase/configserver/ent/runtime.go
const (
Version = "v0.10.1" // Version of ent codegen.
Sum = "h1:dM5h4Zk6yHGIgw4dCqVzGw3nWgpGYJiV4/kyHEF6PFo=" // Sum of ent codegen.
)
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package schema
import (
"time"
"entgo.io/ent"
"entgo.io/ent/dialect/entsql"
"entgo.io/ent/schema/field"
"entgo.io/ent/schema/index"
)
// ObCluster holds the schema definition for the ObCluster entity.
type ObCluster struct {
ent.Schema
}
// Fields of the ObCluster.
func (ObCluster) Fields() []ent.Field {
return []ent.Field{
field.Time("create_time").Default(time.Now),
field.Time("update_time").Default(time.Now).UpdateDefault(time.Now),
field.String("name"),
field.Int64("ob_cluster_id").Positive(),
field.String("type"),
field.String("rootservice_json").
Annotations(entsql.Annotation{
Size: 65536,
}),
}
}
func (ObCluster) Edges() []ent.Edge {
return nil
}
func (ObCluster) Indexes() []ent.Index {
return []ent.Index{
index.Fields("update_time"),
index.Fields("name", "ob_cluster_id").Unique(),
}
}
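
The schema above is the single source for everything the generator emitted in this change. Below is a minimal sketch of creating a row through the generated client; ent.Open, client.Schema.Create and the ObClusterCreate builder live in files whose diffs are collapsed here, so their use follows standard entc output rather than code visible in this diff, and the field values are illustrative only:

```go
package main

import (
	"context"
	"log"

	_ "github.com/mattn/go-sqlite3"

	"github.com/oceanbase/configserver/ent"
)

func main() {
	// In-memory sqlite, matching the commented-out connection_url in the sample config.
	client, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := context.Background()
	// Create the ob_clusters table from the schema definition above.
	if err := client.Schema.Create(ctx); err != nil {
		log.Fatal(err)
	}

	// create_time and update_time fall back to their time.Now defaults.
	oc, err := client.ObCluster.Create().
		SetName("obcluster").
		SetObClusterID(1).
		SetType("PRIMARY"). // illustrative value
		SetRootserviceJSON("{}").
		Save(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(oc)
}
```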
(This file's diff is collapsed and not shown.)
## log config
log:
level: info
filename: ./log/ob-configserver.log
maxsize: 30
maxage: 7
maxbackups: 10
localtime: true
compress: true
## server config
server:
address: "0.0.0.0:8080"
run_dir: run
## vip config: ob-configserver generates URLs with the vip address and port and returns them to the client
## if you don't have a vip, using the server address and port is fine, but do not use a random value that cannot be connected to
vip:
address: "127.0.0.1"
port: 8080
## storage config
storage:
## database type, supports sqlite3 or mysql
database_type: mysql
# database_type: sqlite3
## database connection config, should match database_type above
connection_url: "user:password@tcp(127.0.0.1:3306)/oceanbase?parseTime=true"
# connection_url: "/tmp/data.db?cache=shared&_fk=1"
# connection_url: "file:ent?mode=memory&cache=shared&_fk=1"
module github.com/oceanbase/configserver
go 1.17
require (
entgo.io/ent v0.10.1
github.com/gin-contrib/pprof v1.3.0
github.com/gin-gonic/gin v1.7.7
github.com/go-sql-driver/mysql v1.6.0
github.com/gwatts/gin-adapter v0.0.0-20170508204228-c44433c485ad
github.com/mattn/go-isatty v0.0.14
github.com/mattn/go-sqlite3 v1.14.10
github.com/pkg/errors v0.9.1
github.com/sirupsen/logrus v1.8.1
github.com/smartystreets/goconvey v1.7.2
github.com/spf13/cobra v1.4.0
github.com/spf13/viper v1.10.1
github.com/stretchr/testify v1.7.1-0.20210427113832-6241f9ab9942
gopkg.in/natefinch/lumberjack.v2 v2.0.0
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
)
require (
ariga.io/atlas v0.3.7-0.20220303204946-787354f533c3 // indirect
github.com/BurntSushi/toml v1.1.0 // indirect
github.com/agext/levenshtein v1.2.1 // indirect
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-openapi/inflect v0.19.0 // indirect
github.com/go-playground/locales v0.13.0 // indirect
github.com/go-playground/universal-translator v0.17.0 // indirect
github.com/go-playground/validator/v10 v10.4.1 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-cmp v0.5.6 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/hashicorp/hcl/v2 v2.10.0 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/jtolds/gls v4.20.0+incompatible // indirect
github.com/kr/text v0.2.0 // indirect
github.com/leodido/go-urn v1.2.0 // indirect
github.com/magiconair/properties v1.8.5 // indirect
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 // indirect
github.com/mitchellh/mapstructure v1.4.3 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pelletier/go-toml v1.9.4 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/smartystreets/assertions v1.2.0 // indirect
github.com/spf13/afero v1.6.0 // indirect
github.com/spf13/cast v1.4.1 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/ugorji/go/codec v1.1.7 // indirect
github.com/zclconf/go-cty v1.8.0 // indirect
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5 // indirect
golang.org/x/mod v0.5.1 // indirect
golang.org/x/sys v0.0.0-20211210111614-af8b64212486 // indirect
golang.org/x/text v0.3.7 // indirect
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/ini.v1 v1.66.2 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)
(This file's diff is collapsed and not shown.)
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package codec
import (
"bytes"
"encoding/json"
)
func MarshalToJsonString(t interface{}) (string, error) {
bf := bytes.NewBufferString("")
jsonEncoder := json.NewEncoder(bf)
// Keep characters such as '&', '<' and '>' unescaped so URLs in the payload stay readable.
jsonEncoder.SetEscapeHTML(false)
// Note: Encode appends a trailing newline to the output.
err := jsonEncoder.Encode(t)
return bf.String(), err
}
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package codec
import (
"github.com/stretchr/testify/require"
"testing"
)
type Service struct {
Address string `json:"address"`
}
func TestMarshalToJsonString(t *testing.T) {
service := &Service{
Address: "http://helloworld.com/services?a=1&b=2",
}
jsonStr, _ := MarshalToJsonString(service)
require.Equal(t, "{\"address\":\"http://helloworld.com/services?a=1&b=2\"}\n", jsonStr)
}
(7 collapsed file diffs not shown.)
/**
* Copyright (c) 2021 OceanBase
* OceanBase CE is licensed under Mulan PubL v2.
* You can use this software according to the terms and conditions of the Mulan PubL v2.
* You may obtain a copy of Mulan PubL v2 at:
* http://license.coscl.org.cn/MulanPubL-2.0
* THIS SOFTWARE IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
* EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO NON-INFRINGEMENT,
* MERCHANTABILITY OR FIT FOR A PARTICULAR PURPOSE.
* See the Mulan PubL v2 for more details.
*/
package model
type ObClusterIdcRegionInfo struct {
Cluster string `json:"ObRegion"`
ClusterId int64 `json:"ObRegionId"`
IdcList []*IdcRegionInfo `json:"IDCList"`
ReadonlyRsList string `json:"ReadonlyRsList"`
}
type IdcRegionInfo struct {
Idc string `json:"idc"`
Region string `json:"region"`
}
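
The json tags above fix the wire names (ObRegion, ObRegionId, IDCList, ReadonlyRsList). A small sketch of what one record serializes to, reusing MarshalToJsonString from the codec package earlier in this change; both import paths below are assumed for illustration:

```go
package main

import (
	"fmt"

	"github.com/oceanbase/configserver/lib/codec" // import path assumed
	"github.com/oceanbase/configserver/model"     // import path assumed
)

func main() {
	info := &model.ObClusterIdcRegionInfo{
		Cluster:        "obcluster",
		ClusterId:      1,
		IdcList:        []*model.IdcRegionInfo{{Idc: "idc1", Region: "region1"}},
		ReadonlyRsList: "",
	}
	s, err := codec.MarshalToJsonString(info)
	if err != nil {
		panic(err)
	}
	// Prints (plus a trailing newline, as the codec test above expects):
	// {"ObRegion":"obcluster","ObRegionId":1,"IDCList":[{"idc":"idc1","region":"region1"}],"ReadonlyRsList":""}
	fmt.Print(s)
}
```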
(16 collapsed file diffs not shown.)