Commit dd5d4a5e authored by Jin Hai, committed by GitHub

Merge branch '0.5.1' into master

Former-commit-id: 4187de60052a91a0b20b28d362cf6a986572c74a
......@@ -5,10 +5,33 @@ Please mark all changes in the change log and use the ticket from JIRA.
# Milvus 0.5.1 (TODO)
## Bug
- \#134 - JFrog cache error
- \#161 - Search IVFSQHybrid crash on gpu
- \#169 - IVF_FLAT search out of memory
## Feature
- \#90 - The server start error messages could be improved to enhance user experience
- \#104 - test_scheduler core dump
- \#115 - Using new structure for tasktable
- \#139 - New config option use_gpu_threshold
- \#146 - Add only GPU and only CPU version for IVF_SQ8 and IVF_FLAT
- \#164 - Add CPU version for building index
## Improvement
- \#64 - Improve the dump function in scheduler
- \#80 - Print version information into log during server start
- \#82 - Move easyloggingpp into "external" directory
- \#92 - Speed up CMake build process
- \#96 - Remove .a file in milvus/lib for docker-version
- \#118 - Using shared_ptr instead of weak_ptr to avoid performance loss
- \#122 - Add unique id for Job
- \#130 - Set task state MOVED after resource copy is completed
- \#149 - Improve large query optimizer pass
- \#156 - Do not return error when search_resources and index_build_device are set to cpu
- \#159 - Change the configuration name from 'use_gpu_threshold' to 'gpu_search_threshold'
- \#168 - Improve result reduce
- \#175 - Add invalid config unittest
## Task
# Milvus 0.5.0 (2019-10-21)
......
......@@ -28,7 +28,7 @@ Contributions to Milvus fall into the following categories.
If you have improvements to Milvus, send us your pull requests! For those just getting started, see [GitHub workflow](#github-workflow).
The Milvus team members will review your pull requests, and once a pull request is accepted, it will be given a `ready to merge` label. This means we are working on submitting your pull request to the internal repository. After the change has been submitted internally, your pull request will be merged automatically on GitHub.
The Milvus team members will review your pull requests, and once a pull request is accepted, the status of the project it is associated with will be changed to **Reviewer approved**. This means we are working on submitting your pull request to the internal repository. After the change has been submitted internally, your pull request will be merged automatically on GitHub.
### GitHub workflow
......
......@@ -25,6 +25,6 @@
| libunwind | [MIT](https://github.com/libunwind/libunwind/blob/master/LICENSE) |
| gperftools | [BSD 3-Clause](https://github.com/gperftools/gperftools/blob/master/COPYING) |
| grpc | [Apache 2.0](https://github.com/grpc/grpc/blob/master/LICENSE) |
| EASYLOGGINGPP | [MIT](https://github.com/zuhd-org/easyloggingpp/blob/master/LICENSEhttps://github.com/zuhd-org/easyloggingpp/blob/master/LICENSE) |
| EASYLOGGINGPP | [MIT](https://github.com/zuhd-org/easyloggingpp/blob/master/LICENSE) |
| Json | [MIT](https://github.com/nlohmann/json/blob/develop/LICENSE.MIT) |
![Milvuslogo](https://github.com/milvus-io/docs/blob/master/assets/milvus_logo.png)
![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen)
![Language](https://img.shields.io/badge/language-C%2B%2B-blue)
[![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master)
![Release](https://img.shields.io/badge/release-v0.5.0-orange)
![Release_date](https://img.shields.io/badge/release_date-October-yellowgreen)
- [Slack Community](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk)
- [Twitter](https://twitter.com/milvusio)
- [Facebook](https://www.facebook.com/io.milvus.5)
- [Blog](https://www.milvus.io/blog/)
- [CSDN](https://zilliz.blog.csdn.net/)
- [Official website (Chinese)](https://www.milvus.io/zh-CN/)
# Welcome to Milvus
[中文版](README_CN.md)
## What is Milvus
Milvus is the world's fastest similarity search engine for massive feature vectors. It is designed with a heterogeneous computing architecture for the best cost efficiency. Searches over billion-scale vectors take only milliseconds with minimal computing resources.
Milvus provides stable Python, Java and C++ APIs.
Keep up-to-date with newest releases and latest updates by reading Milvus [release notes](https://milvus.io/docs/en/releases/v0.5.0/).
- Heterogeneous computing
Milvus is designed with heterogeneous computing architecture for the best performance and cost efficiency.
- Multiple indexes
Milvus supports a variety of indexing types that employ quantization, tree-based, and graph indexing techniques.
- Intelligent resource management
Milvus automatically adapts search computation and index building processes based on your datasets and available resources.
- Horizontal scalability
Milvus supports online / offline expansion to scale both storage and computation resources with simple commands.
- High availability
Milvus is integrated with the Kubernetes framework so that single points of failure can be avoided.
- High compatibility
Milvus is compatible with almost all deep learning models and major programming languages such as Python, Java, and C++.
Milvus is the world's fastest similarity search engine for massive feature vectors. It is built with a heterogeneous computing architecture for the best cost efficiency. Searches over billion-scale vectors take only milliseconds with minimal computing resources.
- Ease of use
For more detailed introduction of Milvus and its architecture, see [Milvus overview](https://www.milvus.io/docs/en/aboutmilvus/overview/).
Milvus can be easily installed in a few steps and enables you to exclusively focus on feature vectors.
Milvus provides stable [Python](https://pypi.org/project/pymilvus/), [Java](https://milvus-io.github.io/milvus-sdk-java/javadoc/io/milvus/client/package-summary.html) and C++ APIs.
- Visualized monitor
You can track system performance on Prometheus-based GUI monitor dashboards.
## Architecture
![Milvus_arch](https://github.com/milvus-io/docs/blob/master/assets/milvus_arch.png)
Keep up-to-date with newest releases and latest updates by reading Milvus [release notes](https://www.milvus.io/docs/en/release/v0.5.0/).
## Get started
### Hardware Requirements
| Component | Recommended configuration |
| --------- | ----------------------------------- |
| CPU | Intel CPU Haswell or higher |
| GPU | NVIDIA Pascal series or higher |
| Memory | 8 GB or more (depends on data size) |
| Storage | SATA 3.0 SSD or higher |
### Install using docker
Using Docker to install Milvus is a breeze. See the [Milvus install guide](https://milvus.io/docs/en/userguide/install_milvus/) for details.
### Build from source
#### Software requirements
See the [Milvus install guide](https://www.milvus.io/docs/en/userguide/install_milvus/) for using Docker containers. To install Milvus from source code, see [build from source](install.md).
- Ubuntu 18.04 or higher
- CMake 3.14 or higher
- CUDA 10.0 or higher
- NVIDIA driver 418 or higher
#### Compilation
##### Step 1 Install dependencies
```shell
$ cd [Milvus sourcecode path]/core
$ ./ubuntu_build_deps.sh
```
##### Step 2 Build
```shell
$ cd [Milvus sourcecode path]/core
$ ./build.sh -t Debug
# or
$ ./build.sh -t Release
```
When the build completes, everything you need to run Milvus will be installed under `[Milvus root path]/core/milvus`.
#### Launch Milvus server
```shell
$ cd [Milvus root path]/core/milvus
```
Add the `lib/` directory to `LD_LIBRARY_PATH`:
```shell
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/milvus/lib
```
Then start Milvus server:
```shell
$ cd scripts
$ ./start_server.sh
```
To stop Milvus server, run:
```shell
$ ./stop_server.sh
```
To edit Milvus settings in `conf/server_config.yaml` and `conf/log_config.conf`, please read [Milvus Configuration](https://github.com/milvus-io/docs/blob/master/reference/milvus_config.md).
To edit Milvus settings, read [Milvus configuration](https://www.milvus.io/docs/en/reference/milvus_config/).
### Try your first Milvus program
#### Run Python example code
Make sure [Python 3.5](https://www.python.org/downloads/) or higher is already installed and in use.
Install Milvus Python SDK.
```shell
# Install Milvus Python SDK
$ pip install pymilvus==0.2.3
```
Create a new file `example.py`, and add [Python example code](https://github.com/milvus-io/pymilvus/blob/master/examples/AdvancedExample.py) to it.
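If you want a self-contained starting point, below is a minimal sketch of the create/insert/search flow, assuming the pymilvus 0.2.x client API (`Milvus`, `connect`, `create_table`, `add_vectors`, `search_vectors`); the linked example code remains the authoritative reference.
```python
# Minimal sketch assuming the pymilvus 0.2.x API; see the linked
# AdvancedExample.py for the authoritative, complete version.
import random
from milvus import Milvus, MetricType

client = Milvus()
client.connect(host='127.0.0.1', port='19530')

# Create a table for 128-dimensional float vectors.
client.create_table({'table_name': 'example_table',
                     'dimension': 128,
                     'index_file_size': 1024,
                     'metric_type': MetricType.L2})

# Insert 1000 random vectors; Milvus returns an ID for each vector.
vectors = [[random.random() for _ in range(128)] for _ in range(1000)]
status, ids = client.add_vectors('example_table', vectors)

# Search the 10 nearest neighbors of the first 5 vectors.
status, results = client.search_vectors('example_table', top_k=10,
                                        nprobe=16, query_records=vectors[:5])
print(status, results)
```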
Run the example code.
```shell
# Run Milvus Python example
$ python3 example.py
```
Try running a program with Milvus using [Python](https://www.milvus.io/docs/en/userguide/example_code/) or [Java example code](https://github.com/milvus-io/milvus-sdk-java/tree/master/examples).
#### Run C++ example code
To run the C++ example code, use the following commands:
```shell
# Run Milvus C++ example
......@@ -160,41 +38,42 @@ $ python3 example.py
$ ./sdk_simple
```
#### Run Java example code
Make sure Java 8 or higher is already installed.
## Roadmap
Refer to [this link](https://github.com/milvus-io/milvus-sdk-java/tree/master/examples) for the example code.
Please read our [roadmap](https://milvus.io/docs/en/roadmap/) for upcoming features.
## Contribution guidelines
Contributions are welcomed and greatly appreciated. If you want to contribute to Milvus, please read our [contribution guidelines](CONTRIBUTING.md). This project adheres to the [code of conduct](CODE_OF_CONDUCT.md) of Milvus. By participating, you are expected to uphold this code.
Contributions are welcomed and greatly appreciated. Please read our [contribution guidelines](CONTRIBUTING.md) for detailed contribution workflow. This project adheres to the [code of conduct](CODE_OF_CONDUCT.md) of Milvus. By participating, you are expected to uphold this code.
We use [GitHub issues](https://github.com/milvus-io/milvus/issues/new/choose) to track issues and bugs. For general questions and public discussions, please join our community.
## Join the Milvus community
## Join our community
To connect with other users and contributors, welcome to join our [slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk).
To connect with other users and contributors, welcome to join our [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk).
## Milvus Roadmap
## Thanks
Please read our [roadmap](https://milvus.io/docs/en/roadmap/) to learn about upcoming features.
We greatly appreciate the help of the following people.
## Resources
- [akihoni](https://github.com/akihoni) found a broken link and a small typo in the README file.
[Milvus official website](https://www.milvus.io)
## Resources
[Milvus docs](https://www.milvus.io/docs/en/userguide/install_milvus/)
- [Milvus.io](https://www.milvus.io)
[Milvus bootcamp](https://github.com/milvus-io/bootcamp)
- [Milvus bootcamp](https://github.com/milvus-io/bootcamp)
[Milvus blog](https://www.milvus.io/blog/)
- [Milvus Medium](https://medium.com/@milvusio)
[Milvus CSDN](https://zilliz.blog.csdn.net/)
- [Milvus CSDN](https://zilliz.blog.csdn.net/)
[Milvus roadmap](https://milvus.io/docs/en/roadmap/)
- [Milvus Twitter](https://twitter.com/milvusio)
- [Milvus Facebook](https://www.facebook.com/io.milvus.5)
## License
[Apache 2.0 license](LICENSE)
[Apache License 2.0](LICENSE)
![Milvuslogo](https://raw.githubusercontent.com/milvus-io/docs/master/assets/milvus_logo.png)
![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen)
![Language](https://img.shields.io/badge/language-C%2B%2B-blue)
[![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master)
![Release](https://img.shields.io/badge/release-v0.5.0-orange)
![Release_date](https://img.shields.io/badge/release_date-October-yellowgreen)
- [Slack Community](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk)
- [Twitter](https://twitter.com/milvusio)
- [Facebook](https://www.facebook.com/io.milvus.5)
- [Blog](https://www.milvus.io/blog/)
- [CSDN](https://zilliz.blog.csdn.net/)
- [Official website (Chinese)](https://www.milvus.io/zh-CN/)
# Welcome to Milvus
## What is Milvus
Milvus is an open-source similarity search engine for massive feature vectors. Designed on a heterogeneous many-core computing framework, it offers lower cost and better performance. With limited computing resources, a search over one billion vectors responds in mere milliseconds.
Milvus provides stable Python, Java, and C++ APIs.
Keep up-to-date with the latest Milvus releases by reading the [release notes](https://milvus.io/docs/zh-CN/release/v0.5.0/).
- Heterogeneous many-core computing
Milvus is designed on a heterogeneous many-core computing framework, achieving lower cost and better performance.
- Multiple indexes
Milvus supports a variety of indexing methods, including quantization, tree-based, and graph indexing algorithms.
- Intelligent resource management
Milvus automatically adapts query computation and index building to the actual data size and available resources.
- Horizontal scalability
Milvus supports online/offline expansion, so compute and storage nodes can be scaled elastically with simple commands.
- High availability
Milvus is integrated with the Kubernetes framework, effectively avoiding single points of failure.
- Ease of use
Milvus is easy to install and use, and lets you focus exclusively on feature vectors.
- Visualized monitoring
You can track system performance in real time on Prometheus-based GUI monitoring dashboards.
## Architecture
![Milvus_arch](https://github.com/milvus-io/docs/blob/master/assets/milvus_arch.png)
## Get started
### Hardware requirements
| Component | Recommended configuration |
| --------- | ------------------------------------- |
| CPU | Intel CPU Haswell or higher |
| GPU | NVIDIA Pascal series or higher |
| Memory | 8 GB or more (depends on data size) |
| Storage | SATA 3.0 SSD or higher |
### Install using Docker
You can easily install Milvus using Docker. See the [Milvus install guide](https://milvus.io/docs/zh-CN/userguide/install_milvus/) for details.
### Build from source
#### Software requirements
- Ubuntu 18.04 or higher
- CMake 3.14 or higher
- CUDA 10.0 or higher
- NVIDIA driver 418 or higher
#### Compilation
##### Step 1 Install dependencies
```shell
$ cd [Milvus sourcecode path]/core
$ ./ubuntu_build_deps.sh
```
##### Step 2 Build
```shell
$ cd [Milvus sourcecode path]/core
$ ./build.sh -t Debug
# or
$ ./build.sh -t Release
```
When the build completes successfully, all required Milvus components are installed under `[Milvus root path]/core/milvus`.
##### Launch Milvus server
```shell
$ cd [Milvus root path]/core/milvus
```
Add the `lib/` directory to `LD_LIBRARY_PATH`:
```shell
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/milvus/lib
```
Then start the Milvus server:
```shell
$ cd scripts
$ ./start_server.sh
```
To stop the Milvus server, run:
```shell
$ ./stop_server.sh
```
To edit the Milvus configuration files `conf/server_config.yaml` and `conf/log_config.conf`, see [Milvus configuration](https://milvus.io/docs/zh-CN/reference/milvus_config/).
### Try your first Milvus program
#### Run Python example code
Make sure [Python 3.5](https://www.python.org/downloads/) or higher is installed and in use.
Install the Milvus Python SDK.
```shell
# Install Milvus Python SDK
$ pip install pymilvus==0.2.3
```
Create an `example.py` file, and add the [Python example code](https://github.com/milvus-io/pymilvus/blob/master/examples/advanced_example.py) to it.
Run the example code.
```shell
# Run Milvus Python example
$ python3 example.py
```
#### Run C++ example code
```shell
# Run Milvus C++ example
$ cd [Milvus root path]/core/milvus/bin
$ ./sdk_simple
```
#### Run Java example code
Make sure Java 8 or higher is installed.
Get the Java example code from [here](https://github.com/milvus-io/milvus-sdk-java/tree/master/examples).
## Contribution guidelines
Contributions are welcomed and greatly appreciated. Please read our [contribution guidelines](https://github.com/milvus-io/milvus/blob/master/CONTRIBUTING.md) for the detailed contribution workflow. This project adheres to the Milvus [code of conduct](https://github.com/milvus-io/milvus/blob/master/CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.
We use [GitHub issues](https://github.com/milvus-io/milvus/issues/new/choose) to track issues and patches. To ask questions or start discussions, please join our community.
## Join the Milvus community
To connect with other users and contributors, you are welcome to join our [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk).
## Milvus roadmap
Read our [roadmap](https://milvus.io/docs/zh-CN/roadmap/) to learn about upcoming features.
## Resources
[Milvus official website](https://www.milvus.io/)
[Milvus docs](https://www.milvus.io/docs/en/userguide/install_milvus/)
[Milvus bootcamp](https://github.com/milvus-io/bootcamp)
[Milvus blog](https://www.milvus.io/blog/)
[Milvus CSDN](https://zilliz.blog.csdn.net/)
[Milvus roadmap](https://milvus.io/docs/en/roadmap/)
## License
[Apache License 2.0](https://github.com/milvus-io/milvus/blob/master/LICENSE)
#!/usr/bin/env groovy
String cron_timezone = "TZ=Asia/Shanghai"
String cron_string = BRANCH_NAME == "master" ? "H 0 * * * " : ""
cron_string = BRANCH_NAME == "0.5.1" ? "H 1 * * * " : cron_string
pipeline {
agent none
triggers {
cron """${cron_timezone}
${cron_string}"""
}
options {
timestamps()
}
parameters{
choice choices: ['Release', 'Debug'], description: '', name: 'BUILD_TYPE'
string defaultValue: 'cf1434e7-5a4b-4d25-82e8-88d667aef9e5', description: 'GIT CREDENTIALS ID', name: 'GIT_CREDENTIALS_ID', trim: true
string defaultValue: 'registry.zilliz.com', description: 'DOCKER REGISTRY URL', name: 'DOKCER_REGISTRY_URL', trim: true
string defaultValue: 'ba070c98-c8cc-4f7c-b657-897715f359fc', description: 'DOCKER CREDENTIALS ID', name: 'DOCKER_CREDENTIALS_ID', trim: true
string defaultValue: 'http://192.168.1.202/artifactory/milvus', description: 'JFROG ARTIFACTORY URL', name: 'JFROG_ARTFACTORY_URL', trim: true
......@@ -47,7 +57,7 @@ pipeline {
steps {
container('milvus-build-env') {
script {
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/build.groovy"
load "${env.WORKSPACE}/ci/jenkins/step/build.groovy"
}
}
}
......@@ -56,7 +66,7 @@ pipeline {
steps {
container('milvus-build-env') {
script {
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/coverage.groovy"
load "${env.WORKSPACE}/ci/jenkins/step/coverage.groovy"
}
}
}
......@@ -65,7 +75,7 @@ pipeline {
steps {
container('milvus-build-env') {
script {
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/package.groovy"
load "${env.WORKSPACE}/ci/jenkins/step/package.groovy"
}
}
}
......@@ -87,7 +97,7 @@ pipeline {
steps {
container('publish-images'){
script {
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/publishImages.groovy"
load "${env.WORKSPACE}/ci/jenkins/step/publishImages.groovy"
}
}
}
......@@ -109,7 +119,7 @@ pipeline {
steps {
container('milvus-test-env') {
script {
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/deploySingle2Dev.groovy"
load "${env.WORKSPACE}/ci/jenkins/step/deploySingle2Dev.groovy"
}
}
}
......@@ -119,7 +129,12 @@ pipeline {
steps {
container('milvus-test-env') {
script {
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/singleDevTest.groovy"
boolean isNightlyTest = isTimeTriggeredBuild()
if (isNightlyTest) {
load "${env.WORKSPACE}/ci/jenkins/step/singleDevNightlyTest.groovy"
} else {
load "${env.WORKSPACE}/ci/jenkins/step/singleDevTest.groovy"
}
}
}
}
......@@ -129,7 +144,7 @@ pipeline {
steps {
container('milvus-test-env') {
script {
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
load "${env.WORKSPACE}/ci/jenkins/step/cleanupSingleDev.groovy"
}
}
}
......@@ -139,7 +154,7 @@ pipeline {
unsuccessful {
container('milvus-test-env') {
script {
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
load "${env.WORKSPACE}/ci/jenkins/step/cleanupSingleDev.groovy"
}
}
}
......@@ -150,3 +165,9 @@ pipeline {
}
}
boolean isTimeTriggeredBuild() {
if (currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause').size() != 0) {
return true
}
return false
}
try {
sh 'helm init --client-only --skip-refresh --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts'
sh 'helm repo update'
dir ('milvus-helm') {
checkout([$class: 'GitSCM', branches: [[name: "0.5.0"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_CREDENTIALS_ID}", url: "https://github.com/milvus-io/milvus-helm.git", name: 'origin', refspec: "+refs/heads/0.5.0:refs/remotes/origin/0.5.0"]]])
dir ("milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu -f ci/values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
}
}
} catch (exc) {
echo 'Helm install failed!'
sh "helm del --purge ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu"
throw exc
}
......@@ -8,7 +8,7 @@ metadata:
spec:
containers:
- name: milvus-build-env
image: registry.zilliz.com/milvus/milvus-build-env:v0.5.0-ubuntu18.04
image: registry.zilliz.com/milvus/milvus-build-env:v0.5.1-ubuntu18.04
env:
- name: POD_IP
valueFrom:
......
......@@ -109,6 +109,7 @@ for test in `ls ${DIR_UNITTEST}`; do
if [ $? -ne 0 ]; then
echo ${args}
echo ${DIR_UNITTEST}/${test} "run failed"
exit -1
fi
done
......@@ -131,8 +132,13 @@ ${LCOV_CMD} -r "${FILE_INFO_OUTPUT}" -o "${FILE_INFO_OUTPUT_NEW}" \
"*/src/server/Server.cpp" \
"*/src/server/DBWrapper.cpp" \
"*/src/server/grpc_impl/GrpcServer.cpp" \
"*/src/utils/easylogging++.h" \
"*/src/utils/easylogging++.cc"
"*/src/external/easyloggingpp/easylogging++.h" \
"*/src/external/easyloggingpp/easylogging++.cc"
if [ $? -ne 0 ]; then
echo "gen ${FILE_INFO_OUTPUT_NEW} failed"
exit -2
fi
# gen html report
# ${LCOV_GEN_CMD} "${FILE_INFO_OUTPUT_NEW}" --output-directory ${DIR_LCOV_OUTPUT}/
try {
sh "helm del --purge ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu"
def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu", returnStatus: true
if (!helmResult) {
sh "helm del --purge ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu"
}
} catch (exc) {
def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu", returnStatus: true
if (!helmResult) {
......
timeout(time: 60, unit: 'MINUTES') {
timeout(time: 30, unit: 'MINUTES') {
dir ("ci/jenkins/scripts") {
sh "./coverage.sh -o /opt/milvus -u root -p 123456 -t \$POD_IP"
// Set some env variables so codecov detection script works correctly
......
sh 'helm init --client-only --skip-refresh --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts'
sh 'helm repo update'
dir ('milvus-helm') {
checkout([$class: 'GitSCM', branches: [[name: "0.5.0"]], userRemoteConfigs: [[url: "https://github.com/milvus-io/milvus-helm.git", name: 'origin', refspec: "+refs/heads/0.5.0:refs/remotes/origin/0.5.0"]]])
dir ("milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu -f ci/db_backend/sqlite_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
}
}
timeout(time: 30, unit: 'MINUTES') {
timeout(time: 90, unit: 'MINUTES') {
dir ("tests/milvus_python_test") {
sh 'python3 -m pip install -r requirements.txt'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu-milvus-gpu-engine.milvus.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu-milvus-gpu-engine.milvus.svc.cluster.local"
}
// mysql database backend test
load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
if (!fileExists('milvus-helm')) {
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "0.5.0"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_CREDENTIALS_ID}", url: "https://github.com/milvus-io/milvus-helm.git", name: 'origin', refspec: "+refs/heads/0.5.0:refs/remotes/origin/0.5.0"]]])
checkout([$class: 'GitSCM', branches: [[name: "0.5.0"]], userRemoteConfigs: [[url: "https://github.com/milvus-io/milvus-helm.git", name: 'origin', refspec: "+refs/heads/0.5.0:refs/remotes/origin/0.5.0"]]])
}
}
dir ("milvus-helm") {
......@@ -17,6 +17,6 @@ timeout(time: 30, unit: 'MINUTES') {
}
}
dir ("tests/milvus_python_test") {
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu-milvus-gpu-engine.milvus.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu-milvus-gpu-engine.milvus.svc.cluster.local"
}
}
timeout(time: 60, unit: 'MINUTES') {
dir ("tests/milvus_python_test") {
sh 'python3 -m pip install -r requirements.txt'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu-milvus-gpu-engine.milvus.svc.cluster.local"
}
// mysql database backend test
// load "${env.WORKSPACE}/ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
// Remove mysql-version tests: 10-28
// if (!fileExists('milvus-helm')) {
// dir ("milvus-helm") {
// checkout([$class: 'GitSCM', branches: [[name: "0.5.0"]], userRemoteConfigs: [[url: "https://github.com/milvus-io/milvus-helm.git", name: 'origin', refspec: "+refs/heads/0.5.0:refs/remotes/origin/0.5.0"]]])
// }
// }
// dir ("milvus-helm") {
// dir ("milvus-gpu") {
// sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu -f ci/db_backend/mysql_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
// }
// }
// dir ("tests/milvus_python_test") {
// sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-gpu-milvus-gpu-engine.milvus.svc.cluster.local"
// }
}
......@@ -3,7 +3,7 @@ timeout(time: 30, unit: 'MINUTES') {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements.txt -i http://pypi.douban.com/simple --trusted-host pypi.douban.com'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-1.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-1.svc.cluster.local --internal=true"
}
// mysql database backend test
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
......@@ -19,7 +19,7 @@ timeout(time: 30, unit: 'MINUTES') {
}
}
dir ("${PROJECT_NAME}_test") {
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-2.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-2.svc.cluster.local --internal=true"
}
} catch (exc) {
echo 'Milvus Test Failed!'
......
......@@ -3,7 +3,7 @@ timeout(time: 60, unit: 'MINUTES') {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements.txt -i http://pypi.douban.com/simple --trusted-host pypi.douban.com'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-1.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-1.svc.cluster.local --internal=true"
}
// mysql database backend test
......@@ -20,7 +20,7 @@ timeout(time: 60, unit: 'MINUTES') {
}
}
dir ("${PROJECT_NAME}_test") {
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-2.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-2.svc.cluster.local --internal=true"
}
} catch (exc) {
echo 'Milvus Test Failed!'
......
......@@ -14,7 +14,7 @@ container('milvus-build-env') {
sh "export JFROG_ARTFACTORY_URL='${params.JFROG_ARTFACTORY_URL}' \
&& export JFROG_USER_NAME='${USERNAME}' \
&& export JFROG_PASSWORD='${PASSWORD}' \
&& export FAISS_URL='http://192.168.1.105:6060/jinhai/faiss/-/archive/branch-0.2.1/faiss-branch-0.2.1.tar.gz' \
&& export FAISS_URL='http://192.168.1.105:6060/jinhai/faiss/-/archive/branch-0.3.0/faiss-branch-0.3.0.tar.gz' \
&& ./build.sh -t ${params.BUILD_TYPE} -d /opt/milvus -j -u -c"
sh "./coverage.sh -u root -p 123456 -t \$POD_IP"
......
......@@ -14,7 +14,7 @@ container('milvus-build-env') {
sh "export JFROG_ARTFACTORY_URL='${params.JFROG_ARTFACTORY_URL}' \
&& export JFROG_USER_NAME='${USERNAME}' \
&& export JFROG_PASSWORD='${PASSWORD}' \
&& export FAISS_URL='http://192.168.1.105:6060/jinhai/faiss/-/archive/branch-0.2.1/faiss-branch-0.2.1.tar.gz' \
&& export FAISS_URL='http://192.168.1.105:6060/jinhai/faiss/-/archive/branch-0.3.0/faiss-branch-0.3.0.tar.gz' \
&& ./build.sh -t ${params.BUILD_TYPE} -j -d /opt/milvus"
}
}
......
......@@ -32,15 +32,19 @@ string(REGEX REPLACE "\n" "" BUILD_TIME ${BUILD_TIME})
message(STATUS "Build time = ${BUILD_TIME}")
MACRO (GET_GIT_BRANCH_NAME GIT_BRANCH_NAME)
execute_process(COMMAND "git" symbolic-ref --short HEAD OUTPUT_VARIABLE ${GIT_BRANCH_NAME})
execute_process(COMMAND "git" rev-parse --abbrev-ref HEAD OUTPUT_VARIABLE ${GIT_BRANCH_NAME})
if(GIT_BRANCH_NAME STREQUAL "")
execute_process(COMMAND "git" symbolic-ref --short -q HEAD OUTPUT_VARIABLE ${GIT_BRANCH_NAME})
endif()
ENDMACRO (GET_GIT_BRANCH_NAME)
GET_GIT_BRANCH_NAME(GIT_BRANCH_NAME)
message(STATUS "GIT_BRANCH_NAME = ${GIT_BRANCH_NAME}")
if(NOT GIT_BRANCH_NAME STREQUAL "")
string(REGEX REPLACE "\n" "" GIT_BRANCH_NAME ${GIT_BRANCH_NAME})
endif()
set(MILVUS_VERSION "${GIT_BRANCH_NAME}")
set(MILVUS_VERSION "0.5.1")
string(REGEX MATCH "[0-9]+\\.[0-9]+\\.[0-9]" MILVUS_VERSION "${MILVUS_VERSION}")
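A quick sanity check of the extraction pattern above, as a Python sketch (only the regex is taken from the CMake line; everything else here is illustrative):
```python
import re

# Same pattern as the REGEX MATCH above.
PATTERN = r"[0-9]+\.[0-9]+\.[0-9]"

print(re.search(PATTERN, "0.5.1").group(0))   # 0.5.1, as intended
# Caveat: the final group matches a single digit, so a hypothetical
# two-digit patch release would be truncated:
print(re.search(PATTERN, "0.5.12").group(0))  # 0.5.1
```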
find_package(ClangTools)
......@@ -71,7 +75,7 @@ if(MILVUS_VERSION_MAJOR STREQUAL ""
endif()
message(STATUS "Build version = ${MILVUS_VERSION}")
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/version.h.macro ${CMAKE_CURRENT_SOURCE_DIR}/version.h)
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/src/version.h.macro ${CMAKE_CURRENT_SOURCE_DIR}/src/version.h)
message(STATUS "Milvus version: "
"${MILVUS_VERSION_MAJOR}.${MILVUS_VERSION_MINOR}.${MILVUS_VERSION_PATCH} "
......
......@@ -6,4 +6,5 @@
*easylogging++*
*SqliteMetaImpl.cpp
*src/grpc*
*src/external*
*milvus/include*
\ No newline at end of file
......@@ -55,21 +55,10 @@ define_option_string(MILVUS_DEPENDENCY_SOURCE
define_option(MILVUS_VERBOSE_THIRDPARTY_BUILD
"Show output from ExternalProjects rather than just logging to files" ON)
define_option(MILVUS_BOOST_VENDORED "Use vendored Boost instead of existing Boost. \
Note that this requires linking Boost statically" OFF)
define_option(MILVUS_BOOST_HEADER_ONLY "Use only BOOST headers" OFF)
define_option(MILVUS_WITH_BZ2 "Build with BZ2 compression" ON)
define_option(MILVUS_WITH_EASYLOGGINGPP "Build with Easylogging++ library" ON)
define_option(MILVUS_WITH_LZ4 "Build with lz4 compression" ON)
define_option(MILVUS_WITH_PROMETHEUS "Build with PROMETHEUS library" ON)
define_option(MILVUS_WITH_SNAPPY "Build with Snappy compression" ON)
define_option(MILVUS_WITH_SQLITE "Build with SQLite library" ON)
define_option(MILVUS_WITH_SQLITE_ORM "Build with SQLite ORM library" ON)
......@@ -78,16 +67,6 @@ define_option(MILVUS_WITH_MYSQLPP "Build with MySQL++" ON)
define_option(MILVUS_WITH_YAMLCPP "Build with yaml-cpp library" ON)
define_option(MILVUS_WITH_ZLIB "Build with zlib compression" ON)
if(CMAKE_VERSION VERSION_LESS 3.7)
set(MILVUS_WITH_ZSTD_DEFAULT OFF)
else()
# ExternalProject_Add(SOURCE_SUBDIR) is available since CMake 3.7.
set(MILVUS_WITH_ZSTD_DEFAULT ON)
endif()
define_option(MILVUS_WITH_ZSTD "Build with zstd compression" ${MILVUS_WITH_ZSTD_DEFAULT})
if (MILVUS_ENABLE_PROFILING STREQUAL "ON")
define_option(MILVUS_WITH_LIBUNWIND "Build with libunwind" ON)
define_option(MILVUS_WITH_GPERFTOOLS "Build with gperftools" ON)
......@@ -95,6 +74,8 @@ endif()
define_option(MILVUS_WITH_GRPC "Build with GRPC" ON)
define_option(MILVUS_WITH_ZLIB "Build with zlib compression" ON)
#----------------------------------------------------------------------
if(MSVC)
set_option_category("MSVC")
......
This diff is collapsed.
......@@ -2,9 +2,9 @@
server_config:
address: 0.0.0.0 # milvus server ip address (IPv4)
port: 19530 # port range: 1025 ~ 65534
port: 19530 # milvus server port, must be in range [1025, 65534]
deploy_mode: single # deployment type: single, cluster_readonly, cluster_writable
time_zone: UTC+8
time_zone: UTC+8 # time zone, must be in format: UTC+X
db_config:
primary_path: @MILVUS_DB_PATH@ # path used to store data and meta
......@@ -14,30 +14,31 @@ db_config:
# Keep 'dialect://:@:/', and replace other texts with real values
# Replace 'dialect' with 'mysql' or 'sqlite'
insert_buffer_size: 4 # GB, maximum insert buffer size allowed
insert_buffer_size: 4 # GB, maximum insert buffer size allowed, must be a positive integer
# sum of insert_buffer_size and cpu_cache_capacity cannot exceed total memory
preload_table: # preload data at startup, '*' means load all tables, empty value means no preload
# you can specify preload tables like this: table1,table2,table3
metric_config:
enable_monitor: false # enable monitoring or not
enable_monitor: false # enable monitoring or not, must be a boolean
collector: prometheus # prometheus
prometheus_config:
port: 8080 # port prometheus uses to fetch metrics
port: 8080 # port prometheus uses to fetch metrics, must be in range [1025, 65534]
cache_config:
cpu_cache_capacity: 16 # GB, CPU memory used for cache
cpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered
gpu_cache_capacity: 4 # GB, GPU memory used for cache
gpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered
cache_insert_data: false # whether to load inserted data into cache
cpu_cache_capacity: 16 # GB, CPU memory used for cache, must be a positive integer
cpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered, must be in range (0.0, 1.0]
gpu_cache_capacity: 4 # GB, GPU memory used for cache, must be a positive integer
gpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered, must be in range (0.0, 1.0]
cache_insert_data: false # whether to load inserted data into cache, must be a boolean
engine_config:
use_blas_threshold: 20 # if nq < use_blas_threshold, use SSE, faster with fluctuated response times
# if nq >= use_blas_threshold, use OpenBlas, slower with stable response times
gpu_search_threshold: 1000 # threshold beyond which the search computation is executed on GPUs only
resource_config:
search_resources: # define the GPUs used for search computation, valid value: gpux
search_resources: # define the GPUs used for search computation, must be in format: gpux
- gpu0
index_build_device: gpu0 # GPU used for building index
\ No newline at end of file
index_build_device: gpu0 # GPU used for building index, must be in format: gpux
\ No newline at end of file
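The comments above spell out concrete validity constraints, and #175 adds unit tests for invalid configs. As an illustration of what those checks amount to, here is a hedged Python sketch (the real validation lives in the C++ server; the function name and dict layout below are assumptions for illustration only):
```python
import re

def validate_server_config(cfg):
    """Illustrative checks mirroring the documented constraints above;
    the authoritative validation is implemented in the C++ server."""
    errors = []
    if not 1025 <= cfg["server_config"]["port"] <= 65534:
        errors.append("server_config.port must be in range [1025, 65534]")
    if cfg["db_config"]["insert_buffer_size"] < 1:
        errors.append("db_config.insert_buffer_size must be a positive integer")
    for key in ("cpu_cache_threshold", "gpu_cache_threshold"):
        if not 0.0 < cfg["cache_config"][key] <= 1.0:
            errors.append(f"cache_config.{key} must be in range (0.0, 1.0]")
    for res in cfg["resource_config"]["search_resources"]:
        if not re.fullmatch(r"gpu\d+", res):
            errors.append(f"search resource '{res}' must be in format: gpux")
    return errors
```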
......@@ -99,6 +99,7 @@ for test in `ls ${DIR_UNITTEST}`; do
if [ $? -ne 0 ]; then
echo ${args}
echo ${DIR_UNITTEST}/${test} "run failed"
exit -1
fi
done
......@@ -121,8 +122,13 @@ ${LCOV_CMD} -r "${FILE_INFO_OUTPUT}" -o "${FILE_INFO_OUTPUT_NEW}" \
"*/src/server/Server.cpp" \
"*/src/server/DBWrapper.cpp" \
"*/src/server/grpc_impl/GrpcServer.cpp" \
"*/src/utils/easylogging++.h" \
"*/src/utils/easylogging++.cc"
"*/easylogging++.h" \
"*/easylogging++.cc" \
"*/src/external/*"
if [ $? -ne 0 ]; then
echo "generate ${FILE_INFO_OUTPUT_NEW} failed"
exit -2
fi
# gen html report
${LCOV_GEN_CMD} "${FILE_INFO_OUTPUT_NEW}" --output-directory ${DIR_LCOV_OUTPUT}/
......@@ -64,6 +64,13 @@ set(scheduler_files
${scheduler_task_files}
)
aux_source_directory(${MILVUS_ENGINE_SRC}/external/easyloggingpp external_easyloggingpp_files)
aux_source_directory(${MILVUS_ENGINE_SRC}/external/nlohmann external_nlohmann_files)
set(external_files
${external_easyloggingpp_files}
${external_nlohmann_files}
)
aux_source_directory(${MILVUS_ENGINE_SRC}/server server_files)
aux_source_directory(${MILVUS_ENGINE_SRC}/server/grpc_impl grpc_server_files)
aux_source_directory(${MILVUS_ENGINE_SRC}/utils utils_files)
......@@ -77,6 +84,7 @@ set(engine_files
${db_insert_files}
${db_meta_files}
${metrics_files}
${external_files}
${utils_files}
${wrapper_files}
)
......@@ -112,14 +120,10 @@ set(third_party_libs
${client_grpc_lib}
yaml-cpp
${prometheus_lib}
${boost_lib}
bzip2
lz4
snappy
zlib
zstd
${cuda_lib}
mysqlpp
zlib
${boost_lib}
)
if (MILVUS_ENABLE_PROFILING STREQUAL "ON")
......
......@@ -67,15 +67,16 @@ class DB {
virtual Status
Query(const std::string& table_id, uint64_t k, uint64_t nq, uint64_t nprobe, const float* vectors,
QueryResults& results) = 0;
ResultIds& result_ids, ResultDistances& result_distances) = 0;
virtual Status
Query(const std::string& table_id, uint64_t k, uint64_t nq, uint64_t nprobe, const float* vectors,
const meta::DatesT& dates, QueryResults& results) = 0;
const meta::DatesT& dates, ResultIds& result_ids, ResultDistances& result_distances) = 0;
virtual Status
Query(const std::string& table_id, const std::vector<std::string>& file_ids, uint64_t k, uint64_t nq,
uint64_t nprobe, const float* vectors, const meta::DatesT& dates, QueryResults& results) = 0;
uint64_t nprobe, const float* vectors, const meta::DatesT& dates, ResultIds& result_ids,
ResultDistances& result_distances) = 0;
virtual Status
Size(uint64_t& result) = 0;
......
......@@ -136,7 +136,7 @@ DBImpl::DeleteTable(const std::string& table_id, const meta::DatesT& dates) {
// scheduler will determine when to delete table files
auto nres = scheduler::ResMgrInst::GetInstance()->GetNumOfComputeResource();
scheduler::DeleteJobPtr job = std::make_shared<scheduler::DeleteJob>(0, table_id, meta_ptr_, nres);
scheduler::DeleteJobPtr job = std::make_shared<scheduler::DeleteJob>(table_id, meta_ptr_, nres);
scheduler::JobMgrInst::GetInstance()->Put(job);
job->WaitAndDelete();
} else {
......@@ -336,20 +336,20 @@ DBImpl::DropIndex(const std::string& table_id) {
Status
DBImpl::Query(const std::string& table_id, uint64_t k, uint64_t nq, uint64_t nprobe, const float* vectors,
QueryResults& results) {
ResultIds& result_ids, ResultDistances& result_distances) {
if (shutting_down_.load(std::memory_order_acquire)) {
return Status(DB_ERROR, "Milvus server is shut down!");
}
meta::DatesT dates = {utils::GetDate()};
Status result = Query(table_id, k, nq, nprobe, vectors, dates, results);
Status result = Query(table_id, k, nq, nprobe, vectors, dates, result_ids, result_distances);
return result;
}
Status
DBImpl::Query(const std::string& table_id, uint64_t k, uint64_t nq, uint64_t nprobe, const float* vectors,
const meta::DatesT& dates, QueryResults& results) {
const meta::DatesT& dates, ResultIds& result_ids, ResultDistances& result_distances) {
if (shutting_down_.load(std::memory_order_acquire)) {
return Status(DB_ERROR, "Milvus server is shut down!");
}
......@@ -372,14 +372,15 @@ DBImpl::Query(const std::string& table_id, uint64_t k, uint64_t nq, uint64_t npr
}
cache::CpuCacheMgr::GetInstance()->PrintInfo(); // print cache info before query
status = QueryAsync(table_id, file_id_array, k, nq, nprobe, vectors, results);
status = QueryAsync(table_id, file_id_array, k, nq, nprobe, vectors, result_ids, result_distances);
cache::CpuCacheMgr::GetInstance()->PrintInfo(); // print cache info after query
return status;
}
Status
DBImpl::Query(const std::string& table_id, const std::vector<std::string>& file_ids, uint64_t k, uint64_t nq,
uint64_t nprobe, const float* vectors, const meta::DatesT& dates, QueryResults& results) {
uint64_t nprobe, const float* vectors, const meta::DatesT& dates, ResultIds& result_ids,
ResultDistances& result_distances) {
if (shutting_down_.load(std::memory_order_acquire)) {
return Status(DB_ERROR, "Milvus server is shut down!");
}
......@@ -413,7 +414,7 @@ DBImpl::Query(const std::string& table_id, const std::vector<std::string>& file_
}
cache::CpuCacheMgr::GetInstance()->PrintInfo(); // print cache info before query
status = QueryAsync(table_id, file_id_array, k, nq, nprobe, vectors, results);
status = QueryAsync(table_id, file_id_array, k, nq, nprobe, vectors, result_ids, result_distances);
cache::CpuCacheMgr::GetInstance()->PrintInfo(); // print cache info after query
return status;
}
......@@ -432,14 +433,14 @@ DBImpl::Size(uint64_t& result) {
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Status
DBImpl::QueryAsync(const std::string& table_id, const meta::TableFilesSchema& files, uint64_t k, uint64_t nq,
uint64_t nprobe, const float* vectors, QueryResults& results) {
uint64_t nprobe, const float* vectors, ResultIds& result_ids, ResultDistances& result_distances) {
server::CollectQueryMetrics metrics(nq);
TimeRecorder rc("");
// step 1: get files to search
ENGINE_LOG_DEBUG << "Engine query begin, index file count: " << files.size();
scheduler::SearchJobPtr job = std::make_shared<scheduler::SearchJob>(0, k, nq, nprobe, vectors);
scheduler::SearchJobPtr job = std::make_shared<scheduler::SearchJob>(k, nq, nprobe, vectors);
for (auto& file : files) {
scheduler::TableFileSchemaPtr file_ptr = std::make_shared<meta::TableFileSchema>(file);
job->AddIndexFile(file_ptr);
......@@ -453,7 +454,8 @@ DBImpl::QueryAsync(const std::string& table_id, const meta::TableFilesSchema& fi
}
// step 3: construct results
results = job->GetResult();
result_ids = job->GetResultIds();
result_distances = job->GetResultDistances();
rc.ElapseFromBegin("Engine query totally cost");
return Status::OK();
......@@ -754,7 +756,7 @@ DBImpl::BackgroundBuildIndex() {
Status status;
if (!to_index_files.empty()) {
scheduler::BuildIndexJobPtr job = std::make_shared<scheduler::BuildIndexJob>(0, meta_ptr_, options_);
scheduler::BuildIndexJobPtr job = std::make_shared<scheduler::BuildIndexJob>(meta_ptr_, options_);
// step 2: put build index task to scheduler
for (auto& file : to_index_files) {
......
......@@ -91,15 +91,16 @@ class DBImpl : public DB {
Status
Query(const std::string& table_id, uint64_t k, uint64_t nq, uint64_t nprobe, const float* vectors,
QueryResults& results) override;
ResultIds& result_ids, ResultDistances& result_distances) override;
Status
Query(const std::string& table_id, uint64_t k, uint64_t nq, uint64_t nprobe, const float* vectors,
const meta::DatesT& dates, QueryResults& results) override;
const meta::DatesT& dates, ResultIds& result_ids, ResultDistances& result_distances) override;
Status
Query(const std::string& table_id, const std::vector<std::string>& file_ids, uint64_t k, uint64_t nq,
uint64_t nprobe, const float* vectors, const meta::DatesT& dates, QueryResults& results) override;
uint64_t nprobe, const float* vectors, const meta::DatesT& dates, ResultIds& result_ids,
ResultDistances& result_distances) override;
Status
Size(uint64_t& result) override;
......@@ -107,7 +108,7 @@ class DBImpl : public DB {
private:
Status
QueryAsync(const std::string& table_id, const meta::TableFilesSchema& files, uint64_t k, uint64_t nq,
uint64_t nprobe, const float* vectors, QueryResults& results);
uint64_t nprobe, const float* vectors, ResultIds& result_ids, ResultDistances& result_distances);
void
BackgroundTimerTask();
......
......@@ -19,6 +19,7 @@
#include "db/engine/ExecutionEngine.h"
#include <faiss/Index.h>
#include <stdint.h>
#include <utility>
#include <vector>
......@@ -26,12 +27,13 @@
namespace milvus {
namespace engine {
typedef int64_t IDNumber;
using IDNumber = faiss::Index::idx_t;
typedef IDNumber* IDNumberPtr;
typedef std::vector<IDNumber> IDNumbers;
typedef std::vector<std::pair<IDNumber, double>> QueryResult;
typedef std::vector<QueryResult> QueryResults;
typedef std::vector<faiss::Index::idx_t> ResultIds;
typedef std::vector<faiss::Index::distance_t> ResultDistances;
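These typedefs replace the per-query `QueryResults` (a vector of id/distance pairs per query) with two flat arrays. Assuming the row-major `nq * k` layout used by faiss (an assumption this diff does not state by itself), splitting them back into per-query results looks like this Python sketch:
```python
def split_results(result_ids, result_distances, nq, k):
    # Assumption: results are laid out row-major with k entries per
    # query, following the faiss convention the typedefs build on.
    return [
        list(zip(result_ids[q * k:(q + 1) * k],
                 result_distances[q * k:(q + 1) * k]))
        for q in range(nq)
    ]

# Example with nq=2 queries and k=2 results each:
# split_results([7, 3, 9, 1], [0.1, 0.4, 0.0, 0.2], nq=2, k=2)
# -> [[(7, 0.1), (3, 0.4)], [(9, 0.0), (1, 0.2)]]
```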
struct TableIndex {
int32_t engine_type_ = (int)EngineType::FAISS_IDMAP;
......
......@@ -18,16 +18,13 @@
#include "db/engine/ExecutionEngineImpl.h"
#include "cache/CpuCacheMgr.h"
#include "cache/GpuCacheMgr.h"
#include "knowhere/common/Config.h"
#include "metrics/Metrics.h"
#include "scheduler/Utils.h"
#include "server/Config.h"
#include "utils/CommonUtil.h"
#include "utils/Exception.h"
#include "utils/Log.h"
#include "knowhere/common/Config.h"
#include "knowhere/common/Exception.h"
#include "knowhere/index/vector_index/IndexIVFSQHybrid.h"
#include "scheduler/Utils.h"
#include "server/Config.h"
#include "wrapper/ConfAdapter.h"
#include "wrapper/ConfAdapterMgr.h"
#include "wrapper/VecImpl.h"
......@@ -259,33 +256,66 @@ ExecutionEngineImpl::Load(bool to_cache) {
Status
ExecutionEngineImpl::CopyToGpu(uint64_t device_id, bool hybrid) {
#if 0
if (hybrid) {
return Status::OK();
}
const std::string key = location_ + ".quantizer";
std::vector<uint64_t> gpus{device_id};
auto index = std::static_pointer_cast<VecIndex>(cache::GpuCacheMgr::GetInstance(device_id)->GetIndex(location_));
bool already_in_cache = (index != nullptr);
if (already_in_cache) {
index_ = index;
} else {
if (index_ == nullptr) {
ENGINE_LOG_ERROR << "ExecutionEngineImpl: index is null, failed to copy to gpu";
return Status(DB_ERROR, "index is null");
const int64_t NOT_FOUND = -1;
int64_t device_id = NOT_FOUND;
// cache hit
{
knowhere::QuantizerPtr quantizer = nullptr;
for (auto& gpu : gpus) {
auto cache = cache::GpuCacheMgr::GetInstance(gpu);
if (auto cached_quantizer = cache->GetIndex(key)) {
device_id = gpu;
quantizer = std::static_pointer_cast<CachedQuantizer>(cached_quantizer)->Data();
}
}
if (device_id != NOT_FOUND) {
// cache hit
auto config = std::make_shared<knowhere::QuantizerCfg>();
config->gpu_id = device_id;
config->mode = 2;
auto new_index = index_->LoadData(quantizer, config);
index_ = new_index;
}
}
try {
index_ = index_->CopyToGpu(device_id);
ENGINE_LOG_DEBUG << "CPU to GPU" << device_id;
} catch (std::exception& e) {
ENGINE_LOG_ERROR << e.what();
return Status(DB_ERROR, e.what());
if (device_id == NOT_FOUND) {
// cache miss
std::vector<int64_t> all_free_mem;
for (auto& gpu : gpus) {
auto cache = cache::GpuCacheMgr::GetInstance(gpu);
auto free_mem = cache->CacheCapacity() - cache->CacheUsage();
all_free_mem.push_back(free_mem);
}
auto max_e = std::max_element(all_free_mem.begin(), all_free_mem.end());
auto best_index = std::distance(all_free_mem.begin(), max_e);
device_id = gpus[best_index];
auto pair = index_->CopyToGpuWithQuantizer(device_id);
index_ = pair.first;
// cache
auto cached_quantizer = std::make_shared<CachedQuantizer>(pair.second);
cache::GpuCacheMgr::GetInstance(device_id)->InsertItem(key, cached_quantizer);
}
return Status::OK();
}
if (!already_in_cache) {
GpuCache(device_id);
#endif
try {
index_ = index_->CopyToGpu(device_id);
ENGINE_LOG_DEBUG << "CPU to GPU" << device_id;
} catch (std::exception& e) {
ENGINE_LOG_ERROR << e.what();
return Status(DB_ERROR, e.what());
}
return Status::OK();
}
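For reference, the disabled block above picks, on a cache miss, the GPU whose cache has the most free memory before copying the index and caching its quantizer. A plain-Python restatement of just that selection rule (a sketch; the C++ above is authoritative):
```python
def pick_gpu(free_mem_by_gpu):
    """Choose the device with the largest free cache capacity, mirroring
    the max_element/distance pair in the disabled C++ block above."""
    return max(free_mem_by_gpu, key=free_mem_by_gpu.get)

# Example: GPU 1 has more free cache memory, so it is selected.
pick_gpu({0: 2 << 30, 1: 3 << 30})  # -> 1
```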
......
......@@ -244,11 +244,14 @@ if(CUSTOMIZATION)
# set(FAISS_MD5 "21deb1c708490ca40ecb899122c01403") # commit-id 643e48f479637fd947e7b93fa4ca72b38ecc9a39 branch-0.2.0
# set(FAISS_MD5 "072db398351cca6e88f52d743bbb9fa0") # commit-id 3a2344d04744166af41ef1a74449d68a315bfe17 branch-0.2.1
# set(FAISS_MD5 "c89ea8e655f5cdf58f42486f13614714") # commit-id 9c28a1cbb88f41fa03b03d7204106201ad33276b branch-0.2.1
set(FAISS_MD5 "87fdd86351ffcaf3f80dc26ade63c44b") # commit-id 841a156e67e8e22cd8088e1b58c00afbf2efc30b branch-0.2.1
# set(FAISS_MD5 "87fdd86351ffcaf3f80dc26ade63c44b") # commit-id 841a156e67e8e22cd8088e1b58c00afbf2efc30b branch-0.2.1
# set(FAISS_MD5 "f3b2ce3364c3fa7febd3aa7fdd0fe380") # commit-id 694e03458e6b69ce8a62502f71f69a614af5af8f branch-0.3.0
# set(FAISS_MD5 "bb30722c22390ce5f6759ccb216c1b2a") # commit-id d324db297475286afe107847c7fb7a0f9dc7e90e branch-0.3.0
set(FAISS_MD5 "2293cdb209c3718e3b19f3edae8b32b3") # commit-id a13c1205dc52977a9ad3b33a14efa958604a8bff branch-0.3.0
endif()
else()
set(FAISS_SOURCE_URL "https://github.com/facebookresearch/faiss/archive/v1.5.3.tar.gz")
set(FAISS_MD5 "0bc12737b23def156f6a1eb782050135")
set(FAISS_SOURCE_URL "https://github.com/milvus-io/faiss/archive/1.6.0.tar.gz")
set(FAISS_MD5 "eb96d84f98b078a9eec04a796f5c792e")
endif()
message(STATUS "FAISS URL = ${FAISS_SOURCE_URL}")
......@@ -299,12 +302,29 @@ macro(build_arrow)
${EP_COMMON_CMAKE_ARGS}
-DARROW_BUILD_STATIC=ON
-DARROW_BUILD_SHARED=OFF
-DARROW_PARQUET=OFF
-DARROW_USE_GLOG=OFF
-DCMAKE_INSTALL_PREFIX=${ARROW_PREFIX}
"-DCMAKE_LIBRARY_PATH=${CUDA_TOOLKIT_ROOT_DIR}/lib64/stubs"
-DARROW_CUDA=OFF
-DARROW_FLIGHT=OFF
-DARROW_GANDIVA=OFF
-DARROW_GANDIVA_JAVA=OFF
-DARROW_HDFS=OFF
-DARROW_HIVESERVER2=OFF
-DARROW_ORC=OFF
-DARROW_PARQUET=OFF
-DARROW_PLASMA=OFF
-DARROW_PLASMA_JAVA_CLIENT=OFF
-DARROW_PYTHON=OFF
-DARROW_WITH_BZ2=OFF
-DARROW_WITH_ZLIB=OFF
-DARROW_WITH_LZ4=OFF
-DARROW_WITH_SNAPPY=OFF
-DARROW_WITH_ZSTD=OFF
-DARROW_WITH_BROTLI=OFF
-DCMAKE_BUILD_TYPE=Release
-DARROW_DEPENDENCY_SOURCE=BUNDLED) #Build all arrow dependencies from source instead of calling find_package first
-DARROW_DEPENDENCY_SOURCE=BUNDLED #Build all arrow dependencies from source instead of calling find_package first
-DBOOST_SOURCE=AUTO #try to find BOOST in the system default locations and build from source if not found
)
if(USE_JFROG_CACHE STREQUAL "ON")
......
......@@ -81,27 +81,6 @@ target_link_libraries(
${depend_libs}
)
INSTALL(TARGETS
knowhere
SPTAGLibStatic
DESTINATION
lib)
INSTALL(FILES
${ARROW_STATIC_LIB}
${ARROW_PREFIX}/lib/libjemalloc_pic.a
${FAISS_STATIC_LIB}
${LAPACK_STATIC_LIB}
${BLAS_STATIC_LIB}
DESTINATION
lib
)
INSTALL(FILES ${OPENBLAS_REAL_STATIC_LIB}
RENAME "libopenblas.a"
DESTINATION lib
)
set(INDEX_INCLUDE_DIRS
${INDEX_SOURCE_DIR}/knowhere
${INDEX_SOURCE_DIR}/thirdparty
......@@ -112,17 +91,4 @@ set(INDEX_INCLUDE_DIRS
${LAPACK_INCLUDE_DIR}
)
set(INDEX_INCLUDE_DIRS ${INDEX_INCLUDE_DIRS} PARENT_SCOPE)
#INSTALL(DIRECTORY
# ${INDEX_SOURCE_DIR}/include/knowhere
# ${ARROW_INCLUDE_DIR}/arrow
# ${FAISS_PREFIX}/include/faiss
# ${OPENBLAS_INCLUDE_DIR}/
# DESTINATION
# include)
#
#INSTALL(DIRECTORY
# ${SPTAG_SOURCE_DIR}/AnnService/inc/
# DESTINATION
# include/SPTAG/AnnService/inc)
set(INDEX_INCLUDE_DIRS ${INDEX_INCLUDE_DIRS} PARENT_SCOPE)
\ No newline at end of file
......@@ -17,7 +17,7 @@
#pragma once
#include "utils/easylogging++.h"
#include "external/easyloggingpp/easylogging++.h"
namespace knowhere {
......
......@@ -38,7 +38,7 @@ class FaissBaseIndex {
virtual void
SealImpl();
protected:
public:
std::shared_ptr<faiss::Index> index_ = nullptr;
};
......
......@@ -15,12 +15,12 @@
// specific language governing permissions and limitations
// under the License.
#include <faiss/gpu/GpuAutoTune.h>
#include <faiss/gpu/GpuIndexFlat.h>
#include <memory>
#include <faiss/gpu/GpuCloner.h>
#include <faiss/gpu/GpuIndexIVF.h>
#include <faiss/gpu/GpuIndexIVFFlat.h>
#include <faiss/index_io.h>
#include <memory>
#include "knowhere/adapter/VectorAdapter.h"
#include "knowhere/common/Exception.h"
......@@ -86,7 +86,8 @@ GPUIVF::SerializeImpl() {
faiss::Index* index = index_.get();
faiss::Index* host_index = faiss::gpu::index_gpu_to_cpu(index);
SealImpl();
// TODO(linxj): support seal
// SealImpl();
faiss::write_index(host_index, &writer);
delete host_index;
......@@ -130,13 +131,12 @@ void
GPUIVF::search_impl(int64_t n, const float* data, int64_t k, float* distances, int64_t* labels, const Config& cfg) {
std::lock_guard<std::mutex> lk(mutex_);
// TODO(linxj): gpu index support GenParams
if (auto device_index = std::dynamic_pointer_cast<faiss::gpu::GpuIndexIVF>(index_)) {
auto search_cfg = std::dynamic_pointer_cast<IVFCfg>(cfg);
device_index->setNumProbes(search_cfg->nprobe);
device_index->nprobe = search_cfg->nprobe;
// assert(device_index->getNumProbes() == search_cfg->nprobe);
{
// TODO(linxj): allocate gpu mem
ResScope rs(res_, gpu_id_);
device_index->search(n, (float*)data, k, distances, labels);
}
......
......@@ -16,8 +16,10 @@
// under the License.
#include <faiss/IndexIVFPQ.h>
#include <faiss/gpu/GpuAutoTune.h>
#include <faiss/gpu/GpuCloner.h>
#include <faiss/gpu/GpuIndexIVFPQ.h>
#include <faiss/index_factory.h>
#include <memory>
#include "knowhere/adapter/VectorAdapter.h"
......
......@@ -15,9 +15,10 @@
// specific language governing permissions and limitations
// under the License.
#include <faiss/gpu/GpuAutoTune.h>
#include <faiss/gpu/GpuCloner.h>
#include <faiss/index_factory.h>
#include <memory>
#include <utility>
#include "knowhere/adapter/VectorAdapter.h"
#include "knowhere/common/Exception.h"
......@@ -71,13 +72,4 @@ GPUIVFSQ::CopyGpuToCpu(const Config& config) {
return std::make_shared<IVFSQ>(new_index);
}
void
GPUIVFSQ::search_impl(int64_t n, const float* data, int64_t k, float* distances, int64_t* labels, const Config& cfg) {
#ifdef CUSTOMIZATION
GPUIVF::search_impl(n, data, k, distances, labels, cfg);
#else
IVF::search_impl(n, data, k, distances, labels, cfg);
#endif
}
} // namespace knowhere
......@@ -38,10 +38,6 @@ class GPUIVFSQ : public GPUIVF {
VectorIndexPtr
CopyGpuToCpu(const Config& config) override;
protected:
void
search_impl(int64_t n, const float* data, int64_t k, float* distances, int64_t* labels, const Config& cfg) override;
};
} // namespace knowhere
......@@ -15,11 +15,12 @@
// specific language governing permissions and limitations
// under the License.
#include <faiss/AutoTune.h>
#include <faiss/IndexFlat.h>
#include <faiss/MetaIndexes.h>
#include <faiss/gpu/GpuAutoTune.h>
#include <faiss/gpu/GpuCloner.h>
#include <faiss/index_factory.h>
#include <faiss/index_io.h>
#include <vector>
#include "knowhere/adapter/VectorAdapter.h"
......
......@@ -15,15 +15,12 @@
// specific language governing permissions and limitations
// under the License.
#include <faiss/AutoTune.h>
#include <faiss/AuxIndexStructures.h>
#include <faiss/IVFlib.h>
#include <faiss/IndexFlat.h>
#include <faiss/IndexIVF.h>
#include <faiss/IndexIVFFlat.h>
#include <faiss/IndexIVFPQ.h>
#include <faiss/gpu/GpuAutoTune.h>
#include <faiss/index_io.h>
#include <faiss/gpu/GpuCloner.h>
#include <chrono>
#include <memory>
#include <utility>
......@@ -224,9 +221,10 @@ IVF::search_impl(int64_t n, const float* data, int64_t k, float* distances, int6
faiss::ivflib::search_with_parameters(index_.get(), n, (float*)data, k, distances, labels, params.get());
stdclock::time_point after = stdclock::now();
double search_cost = (std::chrono::duration<double, std::micro>(after - before)).count();
KNOWHERE_LOG_DEBUG << "IVF search cost: " << search_cost
<< ", quantization cost: " << faiss::indexIVF_stats.quantization_time
<< ", data search cost: " << faiss::indexIVF_stats.search_time;
KNOWHERE_LOG_DEBUG << "K=" << k << " NQ=" << n << " NL=" << faiss::indexIVF_stats.nlist
<< " ND=" << faiss::indexIVF_stats.ndis << " NH=" << faiss::indexIVF_stats.nheap_updates
<< " Q=" << faiss::indexIVF_stats.quantization_time
<< " S=" << faiss::indexIVF_stats.search_time;
faiss::indexIVF_stats.quantization_time = 0;
faiss::indexIVF_stats.search_time = 0;
}
......
......@@ -30,7 +30,7 @@ namespace knowhere {
using Graph = std::vector<std::vector<int64_t>>;
class IVF : public VectorIndex, protected FaissBaseIndex {
class IVF : public VectorIndex, public FaissBaseIndex {
public:
IVF() : FaissBaseIndex(nullptr) {
}
......
......@@ -15,7 +15,8 @@
// specific language governing permissions and limitations
// under the License.
#include <faiss/gpu/GpuAutoTune.h>
#include <faiss/gpu/GpuCloner.h>
#include <faiss/index_factory.h>
#include <memory>
#include "knowhere/adapter/VectorAdapter.h"
......@@ -56,14 +57,7 @@ IVFSQ::CopyCpuToGpu(const int64_t& device_id, const Config& config) {
if (auto res = FaissGpuResourceMgr::GetInstance().GetRes(device_id)) {
ResScope rs(res, device_id, false);
#ifdef CUSTOMIZATION
faiss::gpu::GpuClonerOptions option;
option.allInGpu = true;
auto gpu_index = faiss::gpu::index_cpu_to_gpu(res->faiss_res.get(), device_id, index_.get(), &option);
#else
auto gpu_index = faiss::gpu::index_cpu_to_gpu(res->faiss_res.get(), device_id, index_.get());
#endif
std::shared_ptr<faiss::Index> device_index;
device_index.reset(gpu_index);
......
......@@ -17,19 +17,25 @@
// under the License.
#include "knowhere/index/vector_index/IndexIVFSQHybrid.h"
#include <utility>
#include "faiss/AutoTune.h"
#include "faiss/gpu/GpuAutoTune.h"
#include "faiss/gpu/GpuIndexIVF.h"
#include "knowhere/adapter/VectorAdapter.h"
#include "knowhere/common/Exception.h"
#include <utility>
#include <faiss/gpu/GpuCloner.h>
#include <faiss/gpu/GpuIndexIVF.h>
#include <faiss/index_factory.h>
namespace knowhere {
#ifdef CUSTOMIZATION
// std::mutex g_mutex;
IndexModelPtr
IVFSQHybrid::Train(const DatasetPtr& dataset, const Config& config) {
// std::lock_guard<std::mutex> lk(g_mutex);
auto build_cfg = std::dynamic_pointer_cast<IVFSQCfg>(config);
if (build_cfg != nullptr) {
build_cfg->CheckValid(); // throw exception
......@@ -63,19 +69,17 @@ IVFSQHybrid::Train(const DatasetPtr& dataset, const Config& config) {
VectorIndexPtr
IVFSQHybrid::CopyGpuToCpu(const Config& config) {
if (gpu_mode == 0) {
return std::make_shared<IVFSQHybrid>(index_);
}
std::lock_guard<std::mutex> lk(mutex_);
if (auto device_idx = std::dynamic_pointer_cast<faiss::IndexIVF>(index_)) {
faiss::Index* device_index = index_.get();
faiss::Index* host_index = faiss::gpu::index_gpu_to_cpu(device_index);
faiss::Index* device_index = index_.get();
faiss::Index* host_index = faiss::gpu::index_gpu_to_cpu(device_index);
std::shared_ptr<faiss::Index> new_index;
new_index.reset(host_index);
return std::make_shared<IVFSQHybrid>(new_index);
} else {
// TODO(linxj): why? jinhai
return std::make_shared<IVFSQHybrid>(index_);
}
std::shared_ptr<faiss::Index> new_index;
new_index.reset(host_index);
return std::make_shared<IVFSQHybrid>(new_index);
}
VectorIndexPtr
......@@ -85,13 +89,9 @@ IVFSQHybrid::CopyCpuToGpu(const int64_t& device_id, const Config& config) {
faiss::gpu::GpuClonerOptions option;
option.allInGpu = true;
faiss::IndexComposition index_composition;
index_composition.index = index_.get();
index_composition.quantizer = nullptr;
index_composition.mode = 0; // copy all
auto gpu_index = faiss::gpu::index_cpu_to_gpu(res->faiss_res.get(), device_id, &index_composition, &option);
auto idx = dynamic_cast<faiss::IndexIVF*>(index_.get());
idx->restore_quantizer();
auto gpu_index = faiss::gpu::index_cpu_to_gpu(res->faiss_res.get(), device_id, index_.get(), &option);
std::shared_ptr<faiss::Index> device_index = std::shared_ptr<faiss::Index>(gpu_index);
auto new_idx = std::make_shared<IVFSQHybrid>(device_index, device_id, res);
return new_idx;
......@@ -105,16 +105,26 @@ IVFSQHybrid::LoadImpl(const BinarySet& index_binary) {
FaissBaseIndex::LoadImpl(index_binary); // load on cpu
auto* ivf_index = dynamic_cast<faiss::IndexIVF*>(index_.get());
ivf_index->backup_quantizer();
gpu_mode = 0;
}
void
IVFSQHybrid::search_impl(int64_t n, const float* data, int64_t k, float* distances, int64_t* labels,
const Config& cfg) {
// std::lock_guard<std::mutex> lk(g_mutex);
// static int64_t search_count;
// ++search_count;
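// gpu_mode, as dispatched below: 0 = search on CPU, 1 = hybrid (the
// quantizer lives on GPU quantizer_gpu_id_ while the rest of the search
// runs through the CPU path), 2 = index fully on GPU.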
if (gpu_mode == 2) {
GPUIVF::search_impl(n, data, k, distances, labels, cfg);
} else if (gpu_mode == 1) {
ResScope rs(res_, gpu_id_);
IVF::search_impl(n, data, k, distances, labels, cfg);
// index_->search(n, (float*)data, k, distances, labels);
} else if (gpu_mode == 1) { // hybrid
if (auto res = FaissGpuResourceMgr::GetInstance().GetRes(quantizer_gpu_id_)) {
ResScope rs(res, quantizer_gpu_id_, true);
IVF::search_impl(n, data, k, distances, labels, cfg);
} else {
KNOWHERE_THROW_MSG("Hybrid Search Error, can't get gpu: " + std::to_string(quantizer_gpu_id_) + "resource");
}
} else if (gpu_mode == 0) {
IVF::search_impl(n, data, k, distances, labels, cfg);
}
......@@ -122,16 +132,18 @@ IVFSQHybrid::search_impl(int64_t n, const float* data, int64_t k, float* distanc
QuantizerPtr
IVFSQHybrid::LoadQuantizer(const Config& conf) {
// std::lock_guard<std::mutex> lk(g_mutex);
auto quantizer_conf = std::dynamic_pointer_cast<QuantizerCfg>(conf);
if (quantizer_conf != nullptr) {
if (quantizer_conf->mode != 1) {
KNOWHERE_THROW_MSG("mode only support 1 in this func");
}
}
gpu_id_ = quantizer_conf->gpu_id;
auto gpu_id = quantizer_conf->gpu_id;
if (auto res = FaissGpuResourceMgr::GetInstance().GetRes(gpu_id_)) {
ResScope rs(res, gpu_id_, false);
if (auto res = FaissGpuResourceMgr::GetInstance().GetRes(gpu_id)) {
ResScope rs(res, gpu_id, false);
faiss::gpu::GpuClonerOptions option;
option.allInGpu = true;
......@@ -140,7 +152,7 @@ IVFSQHybrid::LoadQuantizer(const Config& conf) {
index_composition->quantizer = nullptr;
index_composition->mode = quantizer_conf->mode; // only 1
auto gpu_index = faiss::gpu::index_cpu_to_gpu(res->faiss_res.get(), gpu_id_, index_composition, &option);
auto gpu_index = faiss::gpu::index_cpu_to_gpu(res->faiss_res.get(), gpu_id, index_composition, &option);
delete gpu_index;
auto q = std::make_shared<FaissIVFQuantizer>();
......@@ -148,16 +160,19 @@ IVFSQHybrid::LoadQuantizer(const Config& conf) {
auto& q_ptr = index_composition->quantizer;
q->size = q_ptr->d * q_ptr->getNumVecs() * sizeof(float);
q->quantizer = q_ptr;
q->gpu_id = gpu_id;
res_ = res;
gpu_mode = 1;
return q;
} else {
KNOWHERE_THROW_MSG("CopyCpuToGpu Error, can't get gpu: " + std::to_string(gpu_id_) + "resource");
KNOWHERE_THROW_MSG("CopyCpuToGpu Error, can't get gpu: " + std::to_string(gpu_id) + "resource");
}
}
void
IVFSQHybrid::SetQuantizer(const QuantizerPtr& q) {
// std::lock_guard<std::mutex> lk(g_mutex);
auto ivf_quantizer = std::dynamic_pointer_cast<FaissIVFQuantizer>(q);
if (ivf_quantizer == nullptr) {
KNOWHERE_THROW_MSG("Quantizer type error");
......@@ -170,20 +185,27 @@ IVFSQHybrid::SetQuantizer(const QuantizerPtr& q) {
// delete ivf_index->quantizer;
ivf_index->quantizer = ivf_quantizer->quantizer;
}
quantizer_gpu_id_ = ivf_quantizer->gpu_id;
gpu_mode = 1;
}
void
IVFSQHybrid::UnsetQuantizer() {
// std::lock_guard<std::mutex> lk(g_mutex);
auto* ivf_index = dynamic_cast<faiss::IndexIVF*>(index_.get());
if (ivf_index == nullptr) {
KNOWHERE_THROW_MSG("Index type error");
}
ivf_index->quantizer = nullptr;
quantizer_gpu_id_ = -1;
}
VectorIndexPtr
IVFSQHybrid::LoadData(const knowhere::QuantizerPtr& q, const Config& conf) {
// std::lock_guard<std::mutex> lk(g_mutex);
auto quantizer_conf = std::dynamic_pointer_cast<QuantizerCfg>(conf);
if (quantizer_conf != nullptr) {
if (quantizer_conf->mode != 2) {
......@@ -192,13 +214,11 @@ IVFSQHybrid::LoadData(const knowhere::QuantizerPtr& q, const Config& conf) {
} else {
KNOWHERE_THROW_MSG("conf error");
}
// if (quantizer_conf->gpu_id != gpu_id_) {
// KNOWHERE_THROW_MSG("quantizer and data must on the same gpu card");
// }
gpu_id_ = quantizer_conf->gpu_id;
if (auto res = FaissGpuResourceMgr::GetInstance().GetRes(gpu_id_)) {
ResScope rs(res, gpu_id_, false);
auto gpu_id = quantizer_conf->gpu_id;
if (auto res = FaissGpuResourceMgr::GetInstance().GetRes(gpu_id)) {
ResScope rs(res, gpu_id, false);
faiss::gpu::GpuClonerOptions option;
option.allInGpu = true;
......@@ -211,18 +231,20 @@ IVFSQHybrid::LoadData(const knowhere::QuantizerPtr& q, const Config& conf) {
index_composition->quantizer = ivf_quantizer->quantizer;
index_composition->mode = quantizer_conf->mode; // only 2
auto gpu_index = faiss::gpu::index_cpu_to_gpu(res->faiss_res.get(), gpu_id_, index_composition, &option);
auto gpu_index = faiss::gpu::index_cpu_to_gpu(res->faiss_res.get(), gpu_id, index_composition, &option);
std::shared_ptr<faiss::Index> new_idx;
new_idx.reset(gpu_index);
auto sq_idx = std::make_shared<IVFSQHybrid>(new_idx, gpu_id_, res);
auto sq_idx = std::make_shared<IVFSQHybrid>(new_idx, gpu_id, res);
return sq_idx;
} else {
KNOWHERE_THROW_MSG("CopyCpuToGpu Error, can't get gpu: " + std::to_string(gpu_id_) + "resource");
KNOWHERE_THROW_MSG("CopyCpuToGpu Error, can't get gpu: " + std::to_string(gpu_id) + "resource");
}
}
std::pair<VectorIndexPtr, QuantizerPtr>
IVFSQHybrid::CopyCpuToGpuWithQuantizer(const int64_t& device_id, const Config& config) {
// std::lock_guard<std::mutex> lk(g_mutex);
if (auto res = FaissGpuResourceMgr::GetInstance().GetRes(device_id)) {
ResScope rs(res, device_id, false);
faiss::gpu::GpuClonerOptions option;
......@@ -242,12 +264,29 @@ IVFSQHybrid::CopyCpuToGpuWithQuantizer(const int64_t& device_id, const Config& c
auto q = std::make_shared<FaissIVFQuantizer>();
q->quantizer = index_composition.quantizer;
q->size = index_composition.quantizer->d * index_composition.quantizer->getNumVecs() * sizeof(float);
q->gpu_id = device_id;
return std::make_pair(new_idx, q);
} else {
KNOWHERE_THROW_MSG("CopyCpuToGpu Error, can't get gpu: " + std::to_string(gpu_id_) + "resource");
}
}
void
IVFSQHybrid::set_index_model(IndexModelPtr model) {
std::lock_guard<std::mutex> lk(mutex_);
auto host_index = std::static_pointer_cast<IVFIndexModel>(model);
if (auto gpures = FaissGpuResourceMgr::GetInstance().GetRes(gpu_id_)) {
ResScope rs(gpures, gpu_id_, false);
auto device_index = faiss::gpu::index_cpu_to_gpu(gpures->faiss_res.get(), gpu_id_, host_index->index_.get());
index_.reset(device_index);
res_ = gpures;
gpu_mode = 2;
} else {
KNOWHERE_THROW_MSG("load index model error, can't get gpu_resource");
}
}
FaissIVFQuantizer::~FaissIVFQuantizer() {
if (quantizer != nullptr) {
delete quantizer;
......@@ -307,5 +346,10 @@ IVFSQHybrid::LoadImpl(const BinarySet& index_binary) {
GPUIVF::LoadImpl(index_binary);
}
void
IVFSQHybrid::set_index_model(IndexModelPtr model) {
GPUIVF::set_index_model(model);
}
#endif
} // namespace knowhere
......@@ -17,7 +17,9 @@
#pragma once
#include <faiss/gpu/GpuIndexFlat.h>
#include <faiss/index_io.h>
#include <memory>
#include <utility>
......@@ -29,6 +31,7 @@ namespace knowhere {
#ifdef CUSTOMIZATION
struct FaissIVFQuantizer : public Quantizer {
faiss::gpu::GpuIndexFlat* quantizer = nullptr;
int64_t gpu_id = -1;  // device on which the quantizer resides
~FaissIVFQuantizer() override;
};
......@@ -52,6 +55,9 @@ class IVFSQHybrid : public GPUIVFSQ {
}
public:
void
set_index_model(IndexModelPtr model) override;
QuantizerPtr
LoadQuantizer(const Config& conf);
......@@ -85,6 +91,7 @@ class IVFSQHybrid : public GPUIVFSQ {
protected:
int64_t gpu_mode = 0;  // 0: CPU, 1: hybrid (GPU-resident quantizer), 2: full GPU
int64_t quantizer_gpu_id_ = -1;
};
} // namespace knowhere
......@@ -48,6 +48,7 @@ class VectorIndex : public Index {
virtual void
Seal() = 0;
// TODO(linxj): Deprecated
virtual VectorIndexPtr
Clone() = 0;
......
......@@ -17,7 +17,7 @@
#pragma once
#include <faiss/AuxIndexStructures.h>
#include <faiss/impl/io.h>
namespace knowhere {
......
......@@ -3,4 +3,4 @@ BOOST_VERSION=1.70.0
GTEST_VERSION=1.8.1
LAPACK_VERSION=v3.8.0
OPENBLAS_VERSION=v0.3.6
FAISS_VERSION=branch-0.2.1
\ No newline at end of file
FAISS_VERSION=branch-0.3.0
\ No newline at end of file
......@@ -20,7 +20,7 @@ set(basic_libs
)
set(util_srcs
${MILVUS_ENGINE_SRC}/utils/easylogging++.cc
${MILVUS_ENGINE_SRC}/external/easyloggingpp/easylogging++.cc
${INDEX_SOURCE_DIR}/knowhere/knowhere/index/vector_index/helpers/FaissGpuResourceMgr.cpp
${INDEX_SOURCE_DIR}/knowhere/knowhere/index/vector_index/helpers/FaissIO.cpp
${INDEX_SOURCE_DIR}/knowhere/knowhere/index/vector_index/helpers/IndexParameter.cpp
......@@ -86,5 +86,6 @@ install(TARGETS test_gpuresource DESTINATION unittest)
install(TARGETS test_customized_index DESTINATION unittest)
#add_subdirectory(faiss_ori)
#add_subdirectory(faiss_benchmark)
add_subdirectory(test_nsg)
......@@ -26,7 +26,7 @@
#include "knowhere/index/vector_index/IndexIVFSQ.h"
#include "knowhere/index/vector_index/IndexIVFSQHybrid.h"
constexpr int DEVICEID = 0;
int DEVICEID = 0;
constexpr int64_t DIM = 128;
constexpr int64_t NB = 10000;
constexpr int64_t NQ = 10;
......
include_directories(${INDEX_SOURCE_DIR}/thirdparty)
include_directories(${INDEX_SOURCE_DIR}/include)
include_directories(/usr/local/cuda/include)
include_directories(/usr/local/hdf5/include)
link_directories(/usr/local/cuda/lib64)
link_directories(/usr/local/hdf5/lib)
set(unittest_libs
gtest gmock gtest_main gmock_main)
set(depend_libs
faiss openblas lapack hdf5
arrow ${ARROW_PREFIX}/lib/libjemalloc_pic.a
)
set(basic_libs
cudart cublas
gomp gfortran pthread
)
add_executable(test_faiss_benchmark faiss_benchmark_test.cpp)
target_link_libraries(test_faiss_benchmark ${depend_libs} ${unittest_libs} ${basic_libs})
install(TARGETS test_faiss_benchmark DESTINATION unittest)
### To run this FAISS benchmark, please follow these steps:
#### Step 1:
Download the HDF5 source from:
https://support.hdfgroup.org/ftp/HDF5/releases/
and build/install it to "/usr/local/hdf5".
#### Step 2:
Download HDF5 data files from:
https://github.com/erikbern/ann-benchmarks
#### Step 3:
Update 'milvus/core/src/index/unittest/CMakeLists.txt' to uncomment
"#add_subdirectory(faiss_benchmark)".
#### Step 4:
Build Milvus with unit tests enabled: "./build.sh -t Release -u";
the binary 'test_faiss_benchmark' will be generated.
#### Step 5:
Put the HDF5 data files in the same directory as the binary 'test_faiss_benchmark'.
#### Step 6:
Run test binary 'test_faiss_benchmark'.
......@@ -16,17 +16,23 @@
// under the License.
#include <gtest/gtest.h>
#include <thread>
#include "unittest/Helper.h"
#include "unittest/utils.h"
#include "knowhere/common/Timer.h"
class SingleIndexTest : public DataGen, public TestGpuIndexBase {
protected:
void
SetUp() override {
TestGpuIndexBase::SetUp();
Generate(DIM, NB, NQ);
k = K;
nb = 1000000;
nq = 1000;
dim = DIM;
Generate(dim, nb, nq);
k = 1000;
}
void
......@@ -119,4 +125,113 @@ TEST_F(SingleIndexTest, IVFSQHybrid) {
}
}
// TEST_F(SingleIndexTest, thread_safe) {
// assert(!xb.empty());
//
// index_type = "IVFSQHybrid";
// index_ = IndexFactory(index_type);
// auto base = ParamGenerator::GetInstance().Gen(ParameterType::ivfsq);
// auto conf = std::dynamic_pointer_cast<knowhere::IVFSQCfg>(base);
// conf->nlist = 16384;
// conf->k = k;
// conf->nprobe = 10;
// conf->d = dim;
// auto preprocessor = index_->BuildPreprocessor(base_dataset, conf);
// index_->set_preprocessor(preprocessor);
//
// auto model = index_->Train(base_dataset, conf);
// index_->set_index_model(model);
// index_->Add(base_dataset, conf);
// EXPECT_EQ(index_->Count(), nb);
// EXPECT_EQ(index_->Dimension(), dim);
//
// auto binaryset = index_->Serialize();
//
//
//
// auto cpu_idx = std::make_shared<knowhere::IVFSQHybrid>(DEVICEID);
// cpu_idx->Load(binaryset);
// auto pair = cpu_idx->CopyCpuToGpuWithQuantizer(DEVICEID, conf);
// auto quantizer = pair.second;
//
// auto quantizer_conf = std::make_shared<knowhere::QuantizerCfg>();
// quantizer_conf->mode = 2; // only copy data
// quantizer_conf->gpu_id = DEVICEID;
//
// auto CopyAllToGpu = [&](int64_t search_count, bool do_search = false) {
// for (int i = 0; i < search_count; ++i) {
// auto gpu_idx = cpu_idx->CopyCpuToGpu(DEVICEID, conf);
// if (do_search) {
// auto result = gpu_idx->Search(query_dataset, conf);
// AssertAnns(result, nq, conf->k);
// }
// }
// };
//
// auto hybrid_qt_idx = std::make_shared<knowhere::IVFSQHybrid>(DEVICEID);
// hybrid_qt_idx->Load(binaryset);
// auto SetQuantizerDoSearch = [&](int64_t search_count) {
// for (int i = 0; i < search_count; ++i) {
// hybrid_qt_idx->SetQuantizer(quantizer);
// auto result = hybrid_qt_idx->Search(query_dataset, conf);
// AssertAnns(result, nq, conf->k);
// // PrintResult(result, nq, k);
// hybrid_qt_idx->UnsetQuantizer();
// }
// };
//
// auto hybrid_data_idx = std::make_shared<knowhere::IVFSQHybrid>(DEVICEID);
// hybrid_data_idx->Load(binaryset);
// auto LoadDataDoSearch = [&](int64_t search_count, bool do_search = false) {
// for (int i = 0; i < search_count; ++i) {
// auto hybrid_idx = hybrid_data_idx->LoadData(quantizer, quantizer_conf);
// if (do_search) {
// auto result = hybrid_idx->Search(query_dataset, conf);
//// AssertAnns(result, nq, conf->k);
// }
// }
// };
//
// knowhere::TimeRecorder tc("");
// CopyAllToGpu(2000/2, false);
// tc.RecordSection("CopyAllToGpu witout search");
// CopyAllToGpu(400/2, true);
// tc.RecordSection("CopyAllToGpu with search");
// SetQuantizerDoSearch(6);
// tc.RecordSection("SetQuantizer with search");
// LoadDataDoSearch(2000/2, false);
// tc.RecordSection("LoadData without search");
// LoadDataDoSearch(400/2, true);
// tc.RecordSection("LoadData with search");
//
// {
// std::thread t1(CopyAllToGpu, 2000, false);
// std::thread t2(CopyAllToGpu, 400, true);
// t1.join();
// t2.join();
// }
//
// {
// std::thread t1(SetQuantizerDoSearch, 12);
// std::thread t2(CopyAllToGpu, 400, true);
// t1.join();
// t2.join();
// }
//
// {
// std::thread t1(SetQuantizerDoSearch, 12);
// std::thread t2(LoadDataDoSearch, 400, true);
// t1.join();
// t2.join();
// }
//
// {
// std::thread t1(LoadDataDoSearch, 2000, false);
// std::thread t2(LoadDataDoSearch, 400, true);
// t1.join();
// t2.join();
// }
//
//}
#endif
......@@ -20,19 +20,12 @@
#include <iostream>
#include <thread>
#include <faiss/AutoTune.h>
#include <faiss/gpu/GpuAutoTune.h>
#include <faiss/gpu/GpuIndexIVFFlat.h>
#include "knowhere/common/Exception.h"
#include "knowhere/common/Timer.h"
#include "knowhere/index/vector_index/IndexGPUIVF.h"
#include "knowhere/index/vector_index/IndexGPUIVFPQ.h"
#include "knowhere/index/vector_index/IndexGPUIVFSQ.h"
#include "knowhere/index/vector_index/IndexIVF.h"
#include "knowhere/index/vector_index/IndexIVFPQ.h"
#include "knowhere/index/vector_index/IndexIVFSQ.h"
#include "knowhere/index/vector_index/IndexIVFSQHybrid.h"
#include "knowhere/index/vector_index/helpers/Cloner.h"
#include "unittest/Helper.h"
......@@ -51,6 +44,9 @@ class IVFTest : public DataGen, public TestWithParam<::std::tuple<std::string, P
ParameterType parameter_type;
std::tie(index_type, parameter_type) = GetParam();
// Init_with_default();
// nb = 1000000;
// nq = 1000;
// k = 1000;
Generate(DIM, NB, NQ);
index_ = IndexFactory(index_type);
conf = ParamGenerator::GetInstance().Gen(parameter_type);
......@@ -61,16 +57,6 @@ class IVFTest : public DataGen, public TestWithParam<::std::tuple<std::string, P
knowhere::FaissGpuResourceMgr::GetInstance().Free();
}
knowhere::VectorIndexPtr
ChooseTodo() {
std::vector<std::string> gpu_idx{"GPUIVFSQ"};
auto finder = std::find(gpu_idx.cbegin(), gpu_idx.cend(), index_type);
if (finder != gpu_idx.cend()) {
return knowhere::cloner::CopyCpuToGpu(index_, DEVICEID, knowhere::Config());
}
return index_;
}
protected:
std::string index_type;
knowhere::Config conf;
......@@ -100,8 +86,7 @@ TEST_P(IVFTest, ivf_basic) {
EXPECT_EQ(index_->Count(), nb);
EXPECT_EQ(index_->Dimension(), dim);
auto new_idx = ChooseTodo();
auto result = new_idx->Search(query_dataset, conf);
auto result = index_->Search(query_dataset, conf);
AssertAnns(result, nq, conf->k);
// PrintResult(result, nq, k);
}
......@@ -134,8 +119,7 @@ TEST_P(IVFTest, ivf_serialize) {
index_->set_index_model(model);
index_->Add(base_dataset, conf);
auto new_idx = ChooseTodo();
auto result = new_idx->Search(query_dataset, conf);
auto result = index_->Search(query_dataset, conf);
AssertAnns(result, nq, conf->k);
}
......@@ -159,8 +143,7 @@ TEST_P(IVFTest, ivf_serialize) {
index_->Load(binaryset);
EXPECT_EQ(index_->Count(), nb);
EXPECT_EQ(index_->Dimension(), dim);
auto new_idx = ChooseTodo();
auto result = new_idx->Search(query_dataset, conf);
auto result = index_->Search(query_dataset, conf);
AssertAnns(result, nq, conf->k);
}
}
......@@ -176,8 +159,7 @@ TEST_P(IVFTest, clone_test) {
index_->Add(base_dataset, conf);
EXPECT_EQ(index_->Count(), nb);
EXPECT_EQ(index_->Dimension(), dim);
auto new_idx = ChooseTodo();
auto result = new_idx->Search(query_dataset, conf);
auto result = index_->Search(query_dataset, conf);
AssertAnns(result, nq, conf->k);
// PrintResult(result, nq, k);
......@@ -210,12 +192,6 @@ TEST_P(IVFTest, clone_test) {
// }
// }
{
if (index_type == "IVFSQHybrid") {
return;
}
}
{
// copy from gpu to cpu
std::vector<std::string> support_idx_vec{"GPUIVF", "GPUIVFSQ", "IVFSQHybrid"};
......@@ -237,6 +213,10 @@ TEST_P(IVFTest, clone_test) {
}
}
if (index_type == "IVFSQHybrid") {
return;
}
{
// copy to gpu
std::vector<std::string> support_idx_vec{"IVF", "GPUIVF", "IVFSQ", "GPUIVFSQ"};
......@@ -277,8 +257,7 @@ TEST_P(IVFTest, gpu_seal_test) {
index_->Add(base_dataset, conf);
EXPECT_EQ(index_->Count(), nb);
EXPECT_EQ(index_->Dimension(), dim);
auto new_idx = ChooseTodo();
auto result = new_idx->Search(query_dataset, conf);
auto result = index_->Search(query_dataset, conf);
AssertAnns(result, nq, conf->k);
auto cpu_idx = knowhere::cloner::CopyGpuToCpu(index_, knowhere::Config());
......
......@@ -22,12 +22,12 @@
#include <cstring>
#include <string>
#include "../version.h"
#include "external/easyloggingpp/easylogging++.h"
#include "metrics/Metrics.h"
#include "server/Server.h"
#include "src/version.h"
#include "utils/CommonUtil.h"
#include "utils/SignalUtil.h"
#include "utils/easylogging++.h"
INITIALIZE_EASYLOGGINGPP
......
......@@ -54,7 +54,7 @@ ShortestPath(const ResourcePtr& src, const ResourcePtr& dest, const ResourceMgrP
auto cur_neighbours = cur_node->GetNeighbours();
for (auto& neighbour : cur_neighbours) {
auto neighbour_res = std::static_pointer_cast<Resource>(neighbour.neighbour_node.lock());
auto neighbour_res = std::static_pointer_cast<Resource>(neighbour.neighbour_node);
dis_matrix[name_id_map.at(res->name())][name_id_map.at(neighbour_res->name())] =
neighbour.connection.transport_cost();
}
......
......@@ -34,27 +34,30 @@ namespace scheduler {
class BuildMgr {
public:
explicit BuildMgr(int64_t numoftasks) : numoftasks_(numoftasks) {
explicit BuildMgr(int64_t concurrent_limit) : available_(concurrent_limit) {
}
public:
void
Put() {
++numoftasks_;
std::lock_guard<std::mutex> lock(mutex_);
++available_;
}
void
take() {
--numoftasks_;
}
int64_t
numoftasks() {
return (int64_t)numoftasks_;
bool
Take() {
std::lock_guard<std::mutex> lock(mutex_);
if (available_ < 1) {
return false;
} else {
--available_;
return true;
}
}
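// Usage sketch (not from the original source): callers are expected to
// pair a successful Take() with a later Put(), e.g.
//   if (build_mgr->Take()) { /* run one build task */ build_mgr->Put(); }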
private:
std::atomic_long numoftasks_;
std::int64_t available_;
std::mutex mutex_;
};
using BuildMgrPtr = std::shared_ptr<BuildMgr>;
......
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
#pragma once
#include <atomic>
#include <condition_variable>
#include <deque>
#include <list>
#include <memory>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <string>
#include <thread>
#include <unordered_map>
#include <utility>
#include <vector>
namespace milvus {
namespace scheduler {
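// A fixed-capacity ring buffer: put() advances rear_ and throws once rear_
// would catch up with front_, while set_front() publishes the last consumed
// slot. Only front_ is atomic, which suggests a single producer driving
// put() (an assumption; the source does not state its threading contract).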
template <typename T>
class CircleQueue {
using value_type = T;
using atomic_size_type = std::atomic_ullong;
using size_type = uint64_t;
using const_reference = const value_type&;
#define MEMORY_ORDER (std::memory_order::memory_order_seq_cst)
public:
explicit CircleQueue(size_type cap) : data_(cap, nullptr), capacity_(cap), front_() {
front_.store(cap - 1, MEMORY_ORDER);
}
CircleQueue() = delete;
CircleQueue(const CircleQueue& q) = delete;
CircleQueue(CircleQueue&& q) = delete;
public:
const_reference operator[](size_type n) {
return data_[n % capacity_];
}
size_type
front() {
return front_.load(MEMORY_ORDER);
}
size_type
rear() {
return rear_;
}
size_type
size() {
return size_;
}
size_type
capacity() {
return capacity_;
}
void
set_front(uint64_t last_finish) {
if (last_finish == rear_) {
    throw std::runtime_error("CircleQueue: invalid set_front position");
}
front_.store(last_finish % capacity_, MEMORY_ORDER);
}
void
put(const value_type& x) {
if (rear_ % capacity_ == front_.load(MEMORY_ORDER)) {
    throw std::overflow_error("CircleQueue: queue is full");
}
data_[rear_] = x;
rear_ = (rear_ + 1) % capacity_;
if (size_ < capacity_) {
++size_;
}
}
void
put(value_type&& x) {
if (rear_ % capacity_ == front_.load(MEMORY_ORDER)) {
    throw std::overflow_error("CircleQueue: queue is full");
}
data_[rear_] = std::move(x);
rear_ = (rear_ + 1) % capacity_;
if (size_ < capacity_) {
++size_;
}
}
private:
std::vector<value_type> data_;
size_type capacity_;
atomic_size_type front_;
size_type rear_ = 0;
size_type size_ = 0;
#undef MEMORY_ORDER
};
} // namespace scheduler
} // namespace milvus
......@@ -49,6 +49,15 @@ JobMgr::Stop() {
}
}
json
JobMgr::Dump() const {
json ret{
{"running", running_},
{"event_queue_length", queue_.size()},
};
return ret;
}
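// Illustrative output: {"running": true, "event_queue_length": 0}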
void
JobMgr::Put(const JobPtr& job) {
{
......@@ -82,7 +91,7 @@ JobMgr::worker_function() {
// disk resources NEVER be empty.
if (auto disk = res_mgr_->GetDiskResources()[0].lock()) {
for (auto& task : tasks) {
disk->task_table().Put(task);
disk->task_table().Put(task, nullptr);
}
}
}
......@@ -95,20 +104,25 @@ JobMgr::build_task(const JobPtr& job) {
void
JobMgr::calculate_path(const TaskPtr& task) {
if (task->type_ != TaskType::SearchTask) {
return;
}
if (task->type_ == TaskType::SearchTask) {
if (task->label()->Type() != TaskLabelType::SPECIFIED_RESOURCE) {
return;
}
if (task->label()->Type() != TaskLabelType::SPECIFIED_RESOURCE) {
return;
std::vector<std::string> path;
auto spec_label = std::static_pointer_cast<SpecResLabel>(task->label());
auto src = res_mgr_->GetDiskResources()[0];
auto dest = spec_label->resource();
ShortestPath(src.lock(), dest.lock(), res_mgr_, path);
task->path() = Path(path, path.size() - 1);
} else if (task->type_ == TaskType::BuildIndexTask) {
auto spec_label = std::static_pointer_cast<SpecResLabel>(task->label());
auto src = res_mgr_->GetDiskResources()[0];
auto dest = spec_label->resource();
std::vector<std::string> path;
ShortestPath(src.lock(), dest.lock(), res_mgr_, path);
task->path() = Path(path, path.size() - 1);
}
std::vector<std::string> path;
auto spec_label = std::static_pointer_cast<SpecResLabel>(task->label());
auto src = res_mgr_->GetDiskResources()[0];
auto dest = spec_label->resource();
ShortestPath(src.lock(), dest.lock(), res_mgr_, path);
task->path() = Path(path, path.size() - 1);
}
} // namespace scheduler
......
......@@ -28,13 +28,14 @@
#include <vector>
#include "ResourceMgr.h"
#include "interface/interfaces.h"
#include "job/Job.h"
#include "task/Task.h"
namespace milvus {
namespace scheduler {
class JobMgr {
class JobMgr : public interface::dumpable {
public:
explicit JobMgr(ResourceMgrPtr res_mgr);
......@@ -44,6 +45,9 @@ class JobMgr {
void
Stop();
json
Dump() const override;
public:
void
Put(const JobPtr& job);
......
......@@ -170,16 +170,20 @@ ResourceMgr::GetNumGpuResource() const {
return num;
}
std::string
ResourceMgr::Dump() {
std::stringstream ss;
ss << "ResourceMgr contains " << resources_.size() << " resources." << std::endl;
json
ResourceMgr::Dump() const {
json resources{};
for (auto& res : resources_) {
ss << res->Dump();
resources.push_back(res->Dump());
}
return ss.str();
json ret{
{"number_of_resource", resources_.size()},
{"number_of_disk_resource", disk_resources_.size()},
{"number_of_cpu_resource", cpu_resources_.size()},
{"number_of_gpu_resource", gpu_resources_.size()},
{"resources", resources},
};
return ret;
}
std::string
......@@ -187,9 +191,9 @@ ResourceMgr::DumpTaskTables() {
std::stringstream ss;
ss << ">>>>>>>>>>>>>>>ResourceMgr::DumpTaskTable<<<<<<<<<<<<<<<" << std::endl;
for (auto& resource : resources_) {
ss << resource->Dump() << std::endl;
ss << resource->task_table().Dump();
ss << resource->Dump() << std::endl << std::endl;
ss << resource->name() << std::endl;
ss << resource->task_table().Dump().dump();
ss << resource->name() << std::endl << std::endl;
}
return ss.str();
}
......
......@@ -25,13 +25,14 @@
#include <utility>
#include <vector>
#include "interface/interfaces.h"
#include "resource/Resource.h"
#include "utils/Log.h"
namespace milvus {
namespace scheduler {
class ResourceMgr {
class ResourceMgr : public interface::dumpable {
public:
ResourceMgr() = default;
......@@ -74,7 +75,6 @@ class ResourceMgr {
return gpu_resources_;
}
// TODO(wxyu): why return shared pointer
inline std::vector<ResourcePtr>
GetAllResources() {
return resources_;
......@@ -103,8 +103,8 @@ class ResourceMgr {
public:
/******** Utility Functions ********/
std::string
Dump();
json
Dump() const override;
std::string
DumpTaskTables();
......
......@@ -55,8 +55,8 @@ load_simple_config() {
// get resources
auto gpu_ids = get_gpu_pool();
int32_t build_gpu_id;
config.GetResourceConfigIndexBuildDevice(build_gpu_id);
int32_t index_build_device_id;
config.GetResourceConfigIndexBuildDevice(index_build_device_id);
// create and connect
ResMgrInst::GetInstance()->Add(ResourceFactory::Create("disk", "DISK", 0, true, false));
......@@ -70,91 +70,21 @@ load_simple_config() {
for (auto& gpu_id : gpu_ids) {
ResMgrInst::GetInstance()->Add(ResourceFactory::Create(std::to_string(gpu_id), "GPU", gpu_id, true, true));
ResMgrInst::GetInstance()->Connect("cpu", std::to_string(gpu_id), pcie);
if (build_gpu_id == gpu_id) {
if (index_build_device_id == gpu_id) {
find_build_gpu_id = true;
}
}
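// If the index-build device is a GPU that is not already in the search
// pool, register it as an additional GPU resource; a CPU build device
// needs no extra registration.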
if (not find_build_gpu_id) {
if (not find_build_gpu_id && index_build_device_id != server::CPU_DEVICE_ID) {
ResMgrInst::GetInstance()->Add(
ResourceFactory::Create(std::to_string(build_gpu_id), "GPU", build_gpu_id, true, true));
ResMgrInst::GetInstance()->Connect("cpu", std::to_string(build_gpu_id), pcie);
ResourceFactory::Create(std::to_string(index_build_device_id), "GPU", index_build_device_id, true, true));
ResMgrInst::GetInstance()->Connect("cpu", std::to_string(index_build_device_id), pcie);
}
}
void
load_advance_config() {
// try {
// server::ConfigNode &config = server::Config::GetInstance().GetConfig(server::CONFIG_RESOURCE);
//
// if (config.GetChildren().empty()) throw "resource_config null exception";
//
// auto resources = config.GetChild(server::CONFIG_RESOURCES).GetChildren();
//
// if (resources.empty()) throw "Children of resource_config null exception";
//
// for (auto &resource : resources) {
// auto &resname = resource.first;
// auto &resconf = resource.second;
// auto type = resconf.GetValue(server::CONFIG_RESOURCE_TYPE);
//// auto memory = resconf.GetInt64Value(server::CONFIG_RESOURCE_MEMORY);
// auto device_id = resconf.GetInt64Value(server::CONFIG_RESOURCE_DEVICE_ID);
//// auto enable_loader = resconf.GetBoolValue(server::CONFIG_RESOURCE_ENABLE_LOADER);
// auto enable_loader = true;
// auto enable_executor = resconf.GetBoolValue(server::CONFIG_RESOURCE_ENABLE_EXECUTOR);
// auto pinned_memory = resconf.GetInt64Value(server::CONFIG_RESOURCE_PIN_MEMORY);
// auto temp_memory = resconf.GetInt64Value(server::CONFIG_RESOURCE_TEMP_MEMORY);
// auto resource_num = resconf.GetInt64Value(server::CONFIG_RESOURCE_NUM);
//
// auto res = ResMgrInst::GetInstance()->Add(ResourceFactory::Create(resname,
// type,
// device_id,
// enable_loader,
// enable_executor));
//
// if (res.lock()->type() == ResourceType::GPU) {
// auto pinned_memory = resconf.GetInt64Value(server::CONFIG_RESOURCE_PIN_MEMORY, 300);
// auto temp_memory = resconf.GetInt64Value(server::CONFIG_RESOURCE_TEMP_MEMORY, 300);
// auto resource_num = resconf.GetInt64Value(server::CONFIG_RESOURCE_NUM, 2);
// pinned_memory = 1024 * 1024 * pinned_memory;
// temp_memory = 1024 * 1024 * temp_memory;
// knowhere::FaissGpuResourceMgr::GetInstance().InitDevice(device_id,
// pinned_memory,
// temp_memory,
// resource_num);
// }
// }
//
// knowhere::FaissGpuResourceMgr::GetInstance().InitResource();
//
// auto connections = config.GetChild(server::CONFIG_RESOURCE_CONNECTIONS).GetChildren();
// if (connections.empty()) throw "connections config null exception";
// for (auto &conn : connections) {
// auto &connect_name = conn.first;
// auto &connect_conf = conn.second;
// auto connect_speed = connect_conf.GetInt64Value(server::CONFIG_SPEED_CONNECTIONS);
// auto connect_endpoint = connect_conf.GetValue(server::CONFIG_ENDPOINT_CONNECTIONS);
//
// std::string delimiter = "===";
// std::string left = connect_endpoint.substr(0, connect_endpoint.find(delimiter));
// std::string right = connect_endpoint.substr(connect_endpoint.find(delimiter) + 3,
// connect_endpoint.length());
//
// auto connection = Connection(connect_name, connect_speed);
// ResMgrInst::GetInstance()->Connect(left, right, connection);
// }
// } catch (const char *msg) {
// SERVER_LOG_ERROR << msg;
// // TODO(wxyu): throw exception instead
// exit(-1);
//// throw std::exception();
// }
}
void
StartSchedulerService() {
load_simple_config();
// load_advance_config();
ResMgrInst::GetInstance()->Start();
SchedInst::GetInstance()->Start();
JobMgrInst::GetInstance()->Start();
......
......@@ -23,10 +23,14 @@
#include "Scheduler.h"
#include "optimizer/HybridPass.h"
#include "optimizer/LargeSQ8HPass.h"
#include "optimizer/OnlyCPUPass.h"
#include "optimizer/OnlyGPUPass.h"
#include "optimizer/Optimizer.h"
#include "server/Config.h"
#include <memory>
#include <mutex>
#include <string>
#include <vector>
namespace milvus {
......@@ -93,8 +97,20 @@ class OptimizerInst {
if (instance == nullptr) {
std::lock_guard<std::mutex> lock(mutex_);
if (instance == nullptr) {
server::Config& config = server::Config::GetInstance();
std::vector<std::string> search_resources;
bool has_cpu = false;
config.GetResourceConfigSearchResources(search_resources);
for (auto& resource : search_resources) {
if (resource == "cpu") {
has_cpu = true;
}
}
std::vector<PassPtr> pass_list;
pass_list.push_back(std::make_shared<LargeSQ8HPass>());
pass_list.push_back(std::make_shared<HybridPass>());
pass_list.push_back(std::make_shared<OnlyCPUPass>());
pass_list.push_back(std::make_shared<OnlyGPUPass>(has_cpu));
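// (has_cpu is handed to OnlyGPUPass so the pass can presumably stand down
// when a CPU search resource is configured.)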
instance = std::make_shared<Optimizer>(pass_list);
}
}
......
......@@ -26,10 +26,8 @@
namespace milvus {
namespace scheduler {
Scheduler::Scheduler(ResourceMgrWPtr res_mgr) : running_(false), res_mgr_(std::move(res_mgr)) {
if (auto mgr = res_mgr_.lock()) {
mgr->RegisterSubscriber(std::bind(&Scheduler::PostEvent, this, std::placeholders::_1));
}
Scheduler::Scheduler(ResourceMgrPtr res_mgr) : running_(false), res_mgr_(std::move(res_mgr)) {
res_mgr_->RegisterSubscriber(std::bind(&Scheduler::PostEvent, this, std::placeholders::_1));
event_register_.insert(std::make_pair(static_cast<uint64_t>(EventType::START_UP),
std::bind(&Scheduler::OnStartUp, this, std::placeholders::_1)));
event_register_.insert(std::make_pair(static_cast<uint64_t>(EventType::LOAD_COMPLETED),
......@@ -40,6 +38,10 @@ Scheduler::Scheduler(ResourceMgrWPtr res_mgr) : running_(false), res_mgr_(std::m
std::bind(&Scheduler::OnFinishTask, this, std::placeholders::_1)));
}
Scheduler::~Scheduler() {
res_mgr_ = nullptr;
}
void
Scheduler::Start() {
running_ = true;
......@@ -66,9 +68,13 @@ Scheduler::PostEvent(const EventPtr& event) {
event_cv_.notify_one();
}
std::string
Scheduler::Dump() {
return std::string();
json
Scheduler::Dump() const {
json ret{
{"running", running_},
{"event_queue_length", event_queue_.size()},
};
return ret;
}
void
......@@ -96,51 +102,45 @@ Scheduler::Process(const EventPtr& event) {
void
Scheduler::OnLoadCompleted(const EventPtr& event) {
auto load_completed_event = std::static_pointer_cast<LoadCompletedEvent>(event);
if (auto resource = event->resource_.lock()) {
resource->WakeupExecutor();
auto task_table_type = load_completed_event->task_table_item_->task->label()->Type();
switch (task_table_type) {
case TaskLabelType::DEFAULT: {
Action::DefaultLabelTaskScheduler(res_mgr_, resource, load_completed_event);
break;
}
case TaskLabelType::SPECIFIED_RESOURCE: {
Action::SpecifiedResourceLabelTaskScheduler(res_mgr_, resource, load_completed_event);
break;
}
case TaskLabelType::BROADCAST: {
if (resource->HasExecutor() == false) {
load_completed_event->task_table_item_->Move();
}
Action::PushTaskToAllNeighbour(load_completed_event->task_table_item_->task, resource);
break;
auto resource = event->resource_;
resource->WakeupExecutor();
auto task_table_type = load_completed_event->task_table_item_->task->label()->Type();
switch (task_table_type) {
case TaskLabelType::DEFAULT: {
Action::DefaultLabelTaskScheduler(res_mgr_, resource, load_completed_event);
break;
}
case TaskLabelType::SPECIFIED_RESOURCE: {
Action::SpecifiedResourceLabelTaskScheduler(res_mgr_, resource, load_completed_event);
break;
}
case TaskLabelType::BROADCAST: {
if (resource->HasExecutor() == false) {
load_completed_event->task_table_item_->Move();
}
default: { break; }
Action::PushTaskToAllNeighbour(load_completed_event->task_table_item_, resource);
break;
}
resource->WakeupLoader();
default: { break; }
}
resource->WakeupLoader();
}
void
Scheduler::OnStartUp(const EventPtr& event) {
if (auto resource = event->resource_.lock()) {
resource->WakeupLoader();
}
event->resource_->WakeupLoader();
}
void
Scheduler::OnFinishTask(const EventPtr& event) {
if (auto resource = event->resource_.lock()) {
resource->WakeupLoader();
}
event->resource_->WakeupLoader();
}
void
Scheduler::OnTaskTableUpdated(const EventPtr& event) {
if (auto resource = event->resource_.lock()) {
resource->WakeupLoader();
}
event->resource_->WakeupLoader();
}
} // namespace scheduler
......
......@@ -25,16 +25,18 @@
#include <unordered_map>
#include "ResourceMgr.h"
#include "interface/interfaces.h"
#include "resource/Resource.h"
#include "utils/Log.h"
namespace milvus {
namespace scheduler {
// TODO(wxyu): refactor, not friendly to unittest, logical in framework code
class Scheduler {
class Scheduler : public interface::dumpable {
public:
explicit Scheduler(ResourceMgrWPtr res_mgr);
explicit Scheduler(ResourceMgrPtr res_mgr);
~Scheduler();
Scheduler(const Scheduler&) = delete;
Scheduler(Scheduler&&) = delete;
......@@ -57,11 +59,8 @@ class Scheduler {
void
PostEvent(const EventPtr& event);
/*
* Dump as string;
*/
std::string
Dump();
json
Dump() const override;
private:
/******** Events ********/
......@@ -121,7 +120,7 @@ class Scheduler {
std::unordered_map<uint64_t, std::function<void(EventPtr)>> event_register_;
ResourceMgrWPtr res_mgr_;
ResourceMgrPtr res_mgr_;
std::queue<EventPtr> event_queue_;
std::thread worker_thread_;
std::mutex event_mutex_;
......
......@@ -70,8 +70,15 @@ TaskCreator::Create(const DeleteJobPtr& job) {
std::vector<TaskPtr>
TaskCreator::Create(const BuildIndexJobPtr& job) {
std::vector<TaskPtr> tasks;
// TODO(yukun): remove "disk" hardcode here
ResourcePtr res_ptr = ResMgrInst::GetInstance()->GetResource("disk");
server::Config& config = server::Config::GetInstance();
int32_t build_index_id;
Status stat = config.GetResourceConfigIndexBuildDevice(build_index_id);
ResourcePtr res_ptr;
if (build_index_id == server::CPU_DEVICE_ID) {
res_ptr = ResMgrInst::GetInstance()->GetResource("cpu");
} else {
res_ptr = ResMgrInst::GetInstance()->GetResource(ResourceType::GPU, build_index_id);
}
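// Build tasks now follow the configured index_build_device: the "cpu"
// resource when it is set to CPU, otherwise the matching GPU resource.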
for (auto& to_index_file : job->to_index_files()) {
auto label = std::make_shared<SpecResLabel>(std::weak_ptr<Resource>(res_ptr));
......
......@@ -28,19 +28,20 @@ namespace scheduler {
class Action {
public:
static void
PushTaskToNeighbourRandomly(const TaskPtr& task, const ResourcePtr& self);
PushTaskToNeighbourRandomly(TaskTableItemPtr task_item, const ResourcePtr& self);
static void
PushTaskToAllNeighbour(const TaskPtr& task, const ResourcePtr& self);
PushTaskToAllNeighbour(TaskTableItemPtr task_item, const ResourcePtr& self);
static void
PushTaskToResource(const TaskPtr& task, const ResourcePtr& dest);
PushTaskToResource(TaskTableItemPtr task_item, const ResourcePtr& dest);
static void
DefaultLabelTaskScheduler(ResourceMgrWPtr res_mgr, ResourcePtr resource, std::shared_ptr<LoadCompletedEvent> event);
DefaultLabelTaskScheduler(const ResourceMgrPtr& res_mgr, ResourcePtr resource,
std::shared_ptr<LoadCompletedEvent> event);
static void
SpecifiedResourceLabelTaskScheduler(ResourceMgrWPtr res_mgr, ResourcePtr resource,
SpecifiedResourceLabelTaskScheduler(const ResourceMgrPtr& res_mgr, ResourcePtr resource,
std::shared_ptr<LoadCompletedEvent> event);
};
......
......@@ -30,7 +30,7 @@ class Resource;
class Event {
public:
explicit Event(EventType type, std::weak_ptr<Resource> resource) : type_(type), resource_(std::move(resource)) {
explicit Event(EventType type, std::shared_ptr<Resource> resource) : type_(type), resource_(std::move(resource)) {
}
inline EventType
......@@ -46,7 +46,7 @@ class Event {
public:
EventType type_;
std::weak_ptr<Resource> resource_;
std::shared_ptr<Resource> resource_;
};
using EventPtr = std::shared_ptr<Event>;
......
......@@ -29,7 +29,7 @@ namespace scheduler {
class FinishTaskEvent : public Event {
public:
FinishTaskEvent(std::weak_ptr<Resource> resource, TaskTableItemPtr task_table_item)
FinishTaskEvent(std::shared_ptr<Resource> resource, TaskTableItemPtr task_table_item)
: Event(EventType::FINISH_TASK, std::move(resource)), task_table_item_(std::move(task_table_item)) {
}
......
......@@ -29,7 +29,7 @@ namespace scheduler {
class LoadCompletedEvent : public Event {
public:
LoadCompletedEvent(std::weak_ptr<Resource> resource, TaskTableItemPtr task_table_item)
LoadCompletedEvent(std::shared_ptr<Resource> resource, TaskTableItemPtr task_table_item)
: Event(EventType::LOAD_COMPLETED, std::move(resource)), task_table_item_(std::move(task_table_item)) {
}
......
......@@ -28,7 +28,7 @@ namespace scheduler {
class StartUpEvent : public Event {
public:
explicit StartUpEvent(std::weak_ptr<Resource> resource) : Event(EventType::START_UP, std::move(resource)) {
explicit StartUpEvent(std::shared_ptr<Resource> resource) : Event(EventType::START_UP, std::move(resource)) {
}
inline std::string
......
......@@ -28,7 +28,7 @@ namespace scheduler {
class TaskTableUpdatedEvent : public Event {
public:
explicit TaskTableUpdatedEvent(std::weak_ptr<Resource> resource)
explicit TaskTableUpdatedEvent(std::shared_ptr<Resource> resource)
: Event(EventType::TASK_TABLE_UPDATED, std::move(resource)) {
}
......
......@@ -37,7 +37,7 @@ struct dumpable {
}
virtual json
Dump() = 0;
Dump() const = 0;
};
} // namespace interface
......
......@@ -23,8 +23,8 @@
namespace milvus {
namespace scheduler {
BuildIndexJob::BuildIndexJob(JobId id, engine::meta::MetaPtr meta_ptr, engine::DBOptions options)
: Job(id, JobType::BUILD), meta_ptr_(std::move(meta_ptr)), options_(std::move(options)) {
BuildIndexJob::BuildIndexJob(engine::meta::MetaPtr meta_ptr, engine::DBOptions options)
: Job(JobType::BUILD), meta_ptr_(std::move(meta_ptr)), options_(std::move(options)) {
}
bool
......@@ -50,9 +50,22 @@ void
BuildIndexJob::BuildIndexDone(size_t to_index_id) {
std::unique_lock<std::mutex> lock(mutex_);
to_index_files_.erase(to_index_id);
cv_.notify_all();
if (to_index_files_.empty()) {
cv_.notify_all();
}
SERVER_LOG_DEBUG << "BuildIndexJob " << id() << " finish index file: " << to_index_id;
}
json
BuildIndexJob::Dump() const {
json ret{
{"number_of_to_index_file", to_index_files_.size()},
};
auto base = Job::Dump();
ret.insert(base.begin(), base.end());
return ret;
}
} // namespace scheduler
} // namespace milvus
......@@ -41,7 +41,7 @@ using Id2ToTableFileMap = std::unordered_map<size_t, TableFileSchema>;
class BuildIndexJob : public Job {
public:
explicit BuildIndexJob(JobId id, engine::meta::MetaPtr meta_ptr, engine::DBOptions options);
explicit BuildIndexJob(engine::meta::MetaPtr meta_ptr, engine::DBOptions options);
public:
bool
......@@ -53,6 +53,9 @@ class BuildIndexJob : public Job {
void
BuildIndexDone(size_t to_index_id);
json
Dump() const override;
public:
Status&
GetStatus() {
......
......@@ -35,7 +35,7 @@ namespace scheduler {
class DeleteJob : public Job {
public:
DeleteJob(JobId id, std::string table_id, engine::meta::MetaPtr meta_ptr, uint64_t num_resource);
DeleteJob(std::string table_id, engine::meta::MetaPtr meta_ptr, uint64_t num_resource);
public:
void
......@@ -44,6 +44,9 @@ class DeleteJob : public Job {
void
ResourceDone();
json
Dump() const override;
public:
std::string
table_id() const {
......