Unverified commit 9790fbdb authored by Jin Hai, committed by GitHub

Merge branch '0.6.0' into 0.6.0

......@@ -19,6 +19,7 @@ Please mark all changes in the change log and use the ticket from JIRA.
- \#416 - Dropping the same partition repeatedly reports success
- \#440 - Query API in customization still uses old version
- \#440 - Server cannot start up with gpu_resource_config.enable=false in GPU version
- \#458 - Index data is not compatible between 0.5 and 0.6
## Feature
- \#12 - Pure CPU version for Milvus
......@@ -42,6 +43,7 @@ Please mark all changes in the change log and use the ticket from JIRA.
- \#404 - Add virtual method Init() in Pass abstract class
- \#409 - Add a Fallback pass in optimizer
- \#433 - C++ SDK query result is not easy to use
- \#449 - Add ShowPartitions example for C++ SDK
## Task
......
......@@ -5,10 +5,10 @@
![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen)
![Language](https://img.shields.io/badge/language-C%2B%2B-blue)
[![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master)
![Release](https://img.shields.io/badge/release-v0.5.3-yellowgreen)
![Release_date](https://img.shields.io/badge/release%20date-November-yellowgreen)
[中文版](README_CN.md) | [日本語版](README_JP.md)
## What is Milvus
......@@ -18,7 +18,7 @@ For more detailed introduction of Milvus and its architecture, see [Milvus overv
Milvus provides stable [Python](https://github.com/milvus-io/pymilvus), [Java](https://github.com/milvus-io/milvus-sdk-java) and [C++](https://github.com/milvus-io/milvus/tree/master/core/src/sdk) APIs.
Keep up-to-date with newest releases and latest updates by reading Milvus [release notes](https://www.milvus.io/docs/en/release/v0.5.3/).
## Get started
......@@ -52,11 +52,13 @@ We use [GitHub issues](https://github.com/milvus-io/milvus/issues) to track issu
To connect with other users and contributors, welcome to join our [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk).
## Contributors
Below is a list of Milvus contributors. We greatly appreciate your contributions!
- [akihoni](https://github.com/akihoni) provided the CN version of the README, and found a broken link in the doc.
- [goodhamgupta](https://github.com/goodhamgupta) fixed a filename typo in the bootcamp doc.
- [erdustiggen](https://github.com/erdustiggen) replaced std::cout with LOG for error messages, and fixed a clang format issue as well as some grammatical errors.
## Resources
......@@ -64,6 +66,8 @@ We greatly appreciate the help of the following people.
- [Milvus bootcamp](https://github.com/milvus-io/bootcamp)
- [Milvus test reports](https://github.com/milvus-io/milvus/tree/master/docs)
- [Milvus Medium](https://medium.com/@milvusio)
- [Milvus CSDN](https://zilliz.blog.csdn.net/)
......@@ -74,6 +78,4 @@ We greatly appreciate the help of the following people.
## License
[Apache License 2.0](LICENSE)
![Milvuslogo](https://raw.githubusercontent.com/milvus-io/docs/master/assets/milvus_logo.png)
[![Slack](https://img.shields.io/badge/Join-Slack-orange)](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk)
![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen)
![Language](https://img.shields.io/badge/language-C%2B%2B-blue)
[![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master)
![Release](https://img.shields.io/badge/release-v0.5.3-yellowgreen)
![Release_date](https://img.shields.io/badge/release_date-October-yellowgreen)
- [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk)
- [Twitter](https://twitter.com/milvusio)
- [Facebook](https://www.facebook.com/io.milvus.5)
- [Blog](https://www.milvus.io/blog/)
- [CSDN](https://zilliz.blog.csdn.net/)
- [Milvus website (Chinese)](https://www.milvus.io/zh-CN/)
# Welcome to Milvus

## What is Milvus

Milvus is an open-source similarity search engine for massive feature vectors. Designed on a heterogeneous many-core computing framework, it delivers better performance at lower cost: searching among billions of vectors takes only milliseconds with limited compute resources.

- Heterogeneous many-core computing
  Milvus is designed on a heterogeneous many-core computing framework, achieving lower cost and better performance.
- Diverse indexing methods
  Milvus supports a variety of index types, including quantization-based, tree-based, and graph-based indexes.
- Intelligent resource management
  Milvus automatically tunes query computation and index building according to the actual data scale and the available resources.
- Horizontal scalability
  Milvus supports online/offline scaling; compute and storage nodes can be elastically scaled with simple commands.
- High availability
  Milvus integrates with the Kubernetes framework, effectively avoiding single points of failure.
- Ease of use
  Milvus is simple to install and use, and lets you focus on your feature vectors.
- Visualized monitoring
  Prometheus-based graphical monitoring lets you track system performance in real time.

For a detailed introduction to Milvus and its architecture, see [Milvus overview](https://www.milvus.io/docs/zh-CN/aboutmilvus/overview/).

Milvus provides stable [Python](https://github.com/milvus-io/pymilvus), [Java](https://github.com/milvus-io/milvus-sdk-java) and [C++](https://github.com/milvus-io/milvus/tree/master/core/src/sdk) APIs.

## Overall architecture

![Milvus_arch](https://github.com/milvus-io/docs/blob/master/assets/milvus_arch.png)

Read the [release notes](https://milvus.io/docs/zh-CN/release/v0.5.3/) for the latest features and updates.
## Get started

### Hardware requirements

| Hardware | Recommended configuration                        |
| -------- | ------------------------------------------------ |
| CPU      | Intel CPU Haswell or later                       |
| GPU      | NVIDIA Pascal series or later                    |
| RAM      | 8 GB or more (depends on the vector data scale)  |
| Storage  | SATA 3.0 SSD or later                            |
### Install with Docker

Using Docker is the easiest way to install Milvus. For details, see the [Milvus installation guide](https://milvus.io/docs/zh-CN/userguide/install_milvus/). To build from source instead, see [Build from source](install.md).

### Build from source

#### Software requirements

- Ubuntu 18.04 or later
- CMake 3.14 or later
- CUDA 10.0 or later
- NVIDIA driver 418 or later

#### Compilation

##### Step 1: Install dependencies

```shell
$ cd [Milvus sourcecode path]/core
$ ./ubuntu_build_deps.sh
```

##### Step 2: Build

```shell
$ cd [Milvus sourcecode path]/core
$ ./build.sh -t Debug
# or
$ ./build.sh -t Release
```

After a successful build, all required Milvus components are installed under `[Milvus root path]/core/milvus`.
##### Start the Milvus server

```shell
$ cd [Milvus root path]/core/milvus
```

Add the `lib/` directory to `LD_LIBRARY_PATH`:

```shell
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/milvus/lib
```

Start the Milvus server:

```shell
$ cd scripts
$ ./start_server.sh
```

To stop the Milvus server, run:

```shell
$ ./stop_server.sh
```

To change the Milvus configuration files `conf/server_config.yaml` and `conf/log_config.conf`, see [Milvus configuration](https://www.milvus.io/docs/zh-CN/reference/milvus_config/).
### Try your first Milvus program

#### Run the Python example

Make sure [Python 3.5](https://www.python.org/downloads/) or later is installed on your system.

Install the Milvus Python SDK:

```shell
# Install Milvus Python SDK
$ pip install pymilvus==0.2.3
```

You can try the Milvus example code in [Python](https://www.milvus.io/docs/en/userguide/example_code/) or [Java](https://github.com/milvus-io/milvus-sdk-java/tree/master/examples).

Create an `example.py` file and add the [Python example code](https://github.com/milvus-io/pymilvus/blob/master/examples/advanced_example.py) to it.

Run the example:

```shell
# Run Milvus Python example
$ python3 example.py
```
#### Run the C++ example

To run the C++ example, use the following commands:
```shell
# Run Milvus C++ example
......@@ -157,41 +37,44 @@ $ python3 example.py
$ ./sdk_simple
```
#### Run the Java example

Make sure Java 8 or later is installed on your system.

Find the Java example code [here](https://github.com/milvus-io/milvus-sdk-java/tree/master/examples).

## Roadmap

Read our [roadmap](https://milvus.io/docs/zh-CN/roadmap/) to learn about upcoming features.

## Contribution guidelines

Contributions are welcomed. For details about the contribution workflow, see the [contribution guidelines](https://github.com/milvus-io/milvus/blob/master/CONTRIBUTING.md). This project follows the Milvus [code of conduct](https://github.com/milvus-io/milvus/blob/master/CODE_OF_CONDUCT.md); if you wish to participate, please abide by it.

We use [GitHub issues](https://github.com/milvus-io/milvus/issues) to track issues and patches. To ask questions or start discussions, please join our community.

## Join the Milvus community

To connect with other users and contributors, join our [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk).

## Contributors

Below is a list of Milvus contributors; we deeply appreciate their contributions:

- [akihoni](https://github.com/akihoni) provided the Chinese version of the README and found an invalid link in it.
- [goodhamgupta](https://github.com/goodhamgupta) found and fixed a filename typo in the bootcamp doc.
- [erdustiggen](https://github.com/erdustiggen) replaced std::cout with LOG for error messages, and fixed a Clang format issue and some grammatical errors.

## Related links

- [Milvus.io](https://www.milvus.io)
- [Milvus bootcamp](https://github.com/milvus-io/bootcamp)
- [Milvus test reports](https://github.com/milvus-io/milvus/tree/master/docs)
- [Milvus Medium](https://medium.com/@milvusio)
- [Milvus CSDN](https://zilliz.blog.csdn.net/)
- [Milvus Twitter](https://twitter.com/milvusio)
- [Milvus Facebook](https://www.facebook.com/io.milvus.5)

## License

[Apache License 2.0](https://github.com/milvus-io/milvus/blob/master/LICENSE)
![Milvuslogo](https://github.com/milvus-io/docs/blob/master/assets/milvus_logo.png)
[![Slack](https://img.shields.io/badge/Join-Slack-orange)](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk)
![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen)
![Language](https://img.shields.io/badge/language-C%2B%2B-blue)
[![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master)
![Release](https://img.shields.io/badge/release-v0.5.3-yellowgreen)
![Release_date](https://img.shields.io/badge/release%20date-November-yellowgreen)
# Welcome to Milvus

## Overview

Milvus is the world's fastest similarity search engine for feature vectors. It is designed on a heterogeneous computing architecture to maximize efficiency: searching for a target among billions of vectors takes only a few milliseconds and requires minimal compute resources.

Milvus provides stable [Python](https://github.com/milvus-io/pymilvus), [Java](https://github.com/milvus-io/milvus-sdk-java) and [C++](https://github.com/milvus-io/milvus/tree/master/core/src/sdk) APIs.

Read the Milvus [release notes](https://milvus.io/docs/en/release/v0.5.3/) to get the latest version and updates.

## Get started

Installing Milvus with Docker is easy. See the [Milvus installation guide](https://milvus.io/docs/en/userguide/install_milvus/). To build Milvus from source, see [Build from source](install.md).

To configure Milvus, read the [Milvus configuration](https://github.com/milvus-io/docs/blob/master/reference/milvus_config.md).

### Try your first Milvus program

Try a Milvus program with the [Python](https://www.milvus.io/docs/en/userguide/example_code/) or [Java](https://github.com/milvus-io/milvus-sdk-java/tree/master/examples) example code.

To run the C++ example, use the following commands:

```shell
# Run Milvus C++ example
$ cd [Milvus root path]/core/milvus/bin
$ ./sdk_simple
```

## Milvus roadmap

Read the [roadmap](https://milvus.io/docs/en/roadmap/) to learn about features scheduled to be added.

## Contribution guidelines

We sincerely appreciate contributions to this project. If you would like to contribute to Milvus, read the [contribution guidelines](CONTRIBUTING.md). This project follows the Milvus [code of conduct](CODE_OF_CONDUCT.md); if you wish to participate, please abide by it.

Use [GitHub issues](https://github.com/milvus-io/milvus/issues) to report issues and bugs. For general questions, join the Milvus community.

## Join the Milvus community

To interact with other contributors, join the Milvus [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk).

## Resources

- [Milvus.io](https://www.milvus.io)
- [Milvus bootcamp](https://github.com/milvus-io/bootcamp)
- [Milvus test reports](https://github.com/milvus-io/milvus/tree/master/docs)
- [Milvus Medium](https://medium.com/@milvusio)
- [Milvus CSDN](https://zilliz.blog.csdn.net/)
- [Milvus Twitter](https://twitter.com/milvusio)
- [Milvus Facebook](https://www.facebook.com/io.milvus.5)

## License

[Apache License 2.0](LICENSE)
// Shared-library helper: publish files to a remote host via the Publish Over FTP plugin.
def FileTransfer (sourceFiles, remoteDirectory, remoteIP, protocol = "ftp", makeEmptyDirs = true) {
if (protocol == "ftp") {
ftpPublisher masterNodeName: '', paramPublish: [parameterName: ''], alwaysPublishFromMaster: false, continueOnError: false, failOnError: true, publishers: [
[configName: "${remoteIP}", transfers: [
[asciiMode: false, cleanRemote: false, excludes: '', flatten: false, makeEmptyDirs: "${makeEmptyDirs}", noDefaultExcludes: false, patternSeparator: '[, ]+', remoteDirectory: "${remoteDirectory}", remoteDirectorySDF: false, removePrefix: '', sourceFiles: "${sourceFiles}"]], usePromotionTimestamp: true, useWorkspaceInPromotion: false, verbose: true
]
]
}
}
return this
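This helper is consumed elsewhere in this change through Jenkins' `load` step (see the packaging scripts below). A minimal usage sketch; the source and remote paths here are placeholders, while `'nas storage'` mirrors the FTP server config name used later in this diff:

```groovy
// Sketch: load the shared helper and push a tarball over FTP.
// Argument values are placeholders, not part of this change.
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("milvus-engine.tar.gz", "milvus/engine/${JOB_NAME}-${BUILD_ID}", 'nas storage')
```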
......@@ -53,7 +53,7 @@ pipeline {
stage("Run Build") {
agent {
kubernetes {
label "${BINRARY_VERSION}-build"
label "${env.BINRARY_VERSION}-build"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/milvus-gpu-version-build-env-pod.yaml'
}
......@@ -62,7 +62,7 @@ pipeline {
stages {
stage('Build') {
steps {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/build.groovy"
}
......@@ -71,7 +71,7 @@ pipeline {
}
stage('Code Coverage') {
steps {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/coverage.groovy"
}
......@@ -80,7 +80,7 @@ pipeline {
}
stage('Upload Package') {
steps {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/package.groovy"
}
......@@ -93,7 +93,7 @@ pipeline {
stage("Publish docker images") {
agent {
kubernetes {
label "${BINRARY_VERSION}-publish"
label "${env.BINRARY_VERSION}-publish"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/docker-pod.yaml'
}
......@@ -113,9 +113,13 @@ pipeline {
}
stage("Deploy to Development") {
environment {
HELM_RELEASE_NAME = "${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}".toLowerCase()
}
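The new `HELM_RELEASE_NAME` gathers the pipeline variables that were previously concatenated inline at every call site, and lowercases the result, presumably because the release name ends up in Kubernetes service and DNS names (see the `--ip ...svc.cluster.local` pytest arguments below), which must be lowercase. A sketch of what it evaluates to, with hypothetical values:

```groovy
// Hypothetical values; the real ones come from the Jenkins environment.
def PIPELINE_NAME   = "milvus-gpu"
def SEMVER          = "0.6.0"
def BUILD_NUMBER    = "42"
def BINRARY_VERSION = "GPU"  // variable name spelled as in the pipeline
def name = "${PIPELINE_NAME}-${SEMVER}-${BUILD_NUMBER}-single-${BINRARY_VERSION}".toLowerCase()
assert name == "milvus-gpu-0.6.0-42-single-gpu"
```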
agent {
kubernetes {
label "${BINRARY_VERSION}-dev-test"
label "${env.BINRARY_VERSION}-dev-test"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/testEnvironment.yaml'
}
......@@ -183,7 +187,7 @@ pipeline {
stage("Run Build") {
agent {
kubernetes {
label "${BINRARY_VERSION}-build"
label "${env.BINRARY_VERSION}-build"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/milvus-cpu-version-build-env-pod.yaml'
}
......@@ -192,7 +196,7 @@ pipeline {
stages {
stage('Build') {
steps {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/build.groovy"
}
......@@ -201,7 +205,7 @@ pipeline {
}
stage('Code Coverage') {
steps {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/coverage.groovy"
}
......@@ -210,7 +214,7 @@ pipeline {
}
stage('Upload Package') {
steps {
container("milvus-${env.BINRARY_VERSION}-build-env") {
script {
load "${env.WORKSPACE}/ci/jenkins/step/package.groovy"
}
......@@ -223,7 +227,7 @@ pipeline {
stage("Publish docker images") {
agent {
kubernetes {
label "${BINRARY_VERSION}-publish"
label "${env.BINRARY_VERSION}-publish"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/docker-pod.yaml'
}
......@@ -243,9 +247,13 @@ pipeline {
}
stage("Deploy to Development") {
environment {
HELM_RELEASE_NAME = "${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}".toLowerCase()
}
agent {
kubernetes {
label "${BINRARY_VERSION}-dev-test"
label "${env.BINRARY_VERSION}-dev-test"
defaultContainer 'jnlp'
yamlFile 'ci/jenkins/pod/testEnvironment.yaml'
}
......
......@@ -7,7 +7,7 @@ metadata:
componet: cpu-build-env
spec:
containers:
- name: milvus-cpu-build-env
image: registry.zilliz.com/milvus/milvus-cpu-build-env:v0.6.0-ubuntu18.04
env:
- name: POD_IP
......
......@@ -7,7 +7,7 @@ metadata:
componet: gpu-build-env
spec:
containers:
- name: milvus-gpu-build-env
image: registry.zilliz.com/milvus/milvus-gpu-build-env:v0.6.0-ubuntu18.04
env:
- name: POD_IP
......
try {
def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true
def helmResult = sh script: "helm status ${env.HELM_RELEASE_NAME}", returnStatus: true
if (!helmResult) {
sh "helm del --purge ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}"
sh "helm del --purge ${env.HELM_RELEASE_NAME}"
}
} catch (exc) {
def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true
def helmResult = sh script: "helm status ${env.HELM_RELEASE_NAME}", returnStatus: true
if (!helmResult) {
sh "helm del --purge ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}"
sh "helm del --purge ${env.HELM_RELEASE_NAME}"
}
throw exc
}
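The status-check-then-purge sequence above is duplicated in the `try` and `catch` branches (and repeated again in the cluster cleanup scripts later in this diff). A hedged sketch of a helper that could factor it out; `purgeHelmRelease` is a hypothetical name, not part of this change:

```groovy
// Hypothetical consolidation of the repeated cleanup logic above.
def purgeHelmRelease(String releaseName) {
    // `helm status` returns non-zero when the release does not exist.
    def helmResult = sh script: "helm status ${releaseName}", returnStatus: true
    if (!helmResult) {
        sh "helm del --purge ${releaseName}"
    }
}
```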
......@@ -3,7 +3,7 @@ sh 'helm repo update'
dir ('milvus-helm') {
checkout([$class: 'GitSCM', branches: [[name: "0.6.0"]], userRemoteConfigs: [[url: "https://github.com/milvus-io/milvus-helm.git", name: 'origin', refspec: "+refs/heads/0.6.0:refs/remotes/origin/0.6.0"]]])
dir ("milvus") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/sqlite_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.HELM_RELEASE_NAME} -f ci/db_backend/sqlite_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
}
}
......@@ -2,6 +2,7 @@ timeout(time: 5, unit: 'MINUTES') {
dir ("ci/jenkins/scripts") {
sh "pip3 install -r requirements.txt"
sh "./yaml_processor.py merge -f /opt/milvus/conf/server_config.yaml -m ../yaml/update_server_config.yaml -i && rm /opt/milvus/conf/server_config.yaml.bak"
sh "sed -i 's/\/tmp\/milvus/\/opt\/milvus/g' /opt/milvus/conf/log_config.conf"
}
sh "tar -zcvf ./${PROJECT_NAME}-${PACKAGE_VERSION}.tar.gz -C /opt/ milvus"
withCredentials([usernamePassword(credentialsId: "${params.JFROG_CREDENTIALS_ID}", usernameVariable: 'JFROG_USERNAME', passwordVariable: 'JFROG_PASSWORD')]) {
......
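The `yaml_processor.py merge` step above folds the override file into `server_config.yaml` in place. Purely as an illustration (this is not the script's actual implementation), a shallow merge could be expressed directly in the pipeline, assuming the Pipeline Utility Steps plugin's `readYaml`/`writeYaml`:

```groovy
// Illustrative sketch only: top-level keys in the override file win.
def base     = readYaml file: '/opt/milvus/conf/server_config.yaml'
def override = readYaml file: 'ci/jenkins/yaml/update_server_config.yaml'
writeYaml file: '/opt/milvus/conf/server_config.yaml', data: base + override, overwrite: true
```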
container('publish-images') {
    timeout(time: 15, unit: 'MINUTES') {
        dir ("docker/deploy/${env.BINRARY_VERSION}/${env.OS_NAME}") {
            def binaryPackage = "${PROJECT_NAME}-${PACKAGE_VERSION}.tar.gz"

            withCredentials([usernamePassword(credentialsId: "${params.JFROG_CREDENTIALS_ID}", usernameVariable: 'JFROG_USERNAME', passwordVariable: 'JFROG_PASSWORD')]) {
                def downloadStatus = sh(returnStatus: true, script: "curl -u${JFROG_USERNAME}:${JFROG_PASSWORD} -O ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage}")

                if (downloadStatus != 0) {
                    error("\" Download \" ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage} \" failed!")
                }
            }
            sh "tar zxvf ${binaryPackage}"
            def imageName = "${PROJECT_NAME}/engine:${DOCKER_VERSION}"

            try {
                def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null")
                if (isExistSourceImage == 0) {
                    def removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}")
                }

                def customImage = docker.build("${imageName}")

                def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null")
                if (isExistTargeImage == 0) {
                    def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}")
                }

                docker.withRegistry("https://${params.DOKCER_REGISTRY_URL}", "${params.DOCKER_CREDENTIALS_ID}") {
                    customImage.push()
                }
            } catch (exc) {
                throw exc
            } finally {
                def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null")
                if (isExistSourceImage == 0) {
                    def removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}")
                }
                def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null")
                if (isExistTargeImage == 0) {
                    def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}")
                }
            }
        }
    }
}
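The `docker inspect`-then-`docker rmi` sequence appears four times in this script (before the build, before the push, and twice in `finally`). Note also that the redirection `2>&1 > /dev/null` used above still prints stderr to the console; `> /dev/null 2>&1` silences both streams. A sketch of a helper that could express the pattern once; `removeImageIfExists` is a hypothetical name, not part of this change:

```groovy
// Hypothetical helper: delete a local image only if it is present.
def removeImageIfExists(String image) {
    def exists = sh(returnStatus: true, script: "docker inspect --type=image ${image} > /dev/null 2>&1")
    if (exists == 0) {
        sh(returnStatus: true, script: "docker rmi ${image}")
    }
}
```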
timeout(time: 90, unit: 'MINUTES') {
dir ("tests/milvus_python_test") {
sh 'python3 -m pip install -r requirements.txt'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
}
// mysql database backend test
load "ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
......@@ -13,10 +13,10 @@ timeout(time: 90, unit: 'MINUTES') {
}
dir ("milvus-helm") {
dir ("milvus") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.HELM_RELEASE_NAME} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
}
}
dir ("tests/milvus_python_test") {
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
}
}
timeout(time: 60, unit: 'MINUTES') {
dir ("tests/milvus_python_test") {
sh 'python3 -m pip install -r requirements.txt'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local"
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
}
// mysql database backend test
......@@ -14,10 +14,10 @@ timeout(time: 60, unit: 'MINUTES') {
// }
// dir ("milvus-helm") {
// dir ("milvus") {
// sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
// sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.HELM_RELEASE_NAME} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
// }
// }
// dir ("tests/milvus_python_test") {
// sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local"
// sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
// }
}
try {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
}
} catch (exc) {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
}
throw exc
}
try {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
}
} catch (exc) {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
}
throw exc
}
try {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster"
}
} catch (exc) {
def result = sh script: "helm status ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster", returnStatus: true
if (!result) {
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster"
}
throw exc
}
try {
sh 'helm init --client-only --skip-refresh --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts'
sh 'helm repo add milvus https://registry.zilliz.com/chartrepo/milvus'
sh 'helm repo update'
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("milvus/milvus-cluster") {
sh "helm install --wait --timeout 300 --set roServers.image.tag=${DOCKER_VERSION} --set woServers.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP -f ci/values.yaml --name ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster --namespace milvus-cluster --version 0.5.0 . "
}
}
/*
timeout(time: 2, unit: 'MINUTES') {
waitUntil {
def result = sh script: "nc -z -w 3 ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster-milvus-cluster-proxy.milvus-cluster.svc.cluster.local 19530", returnStatus: true
return !result
}
}
*/
} catch (exc) {
echo 'Helm running failed!'
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster"
throw exc
}
timeout(time: 25, unit: 'MINUTES') {
try {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements_cluster.txt'
sh "pytest . --alluredir=cluster_test_out --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-cluster-milvus-cluster-proxy.milvus-cluster.svc.cluster.local"
}
} catch (exc) {
echo 'Milvus Cluster Test Failed !'
throw exc
}
}
try {
sh 'helm init --client-only --skip-refresh --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts'
sh 'helm repo add milvus https://registry.zilliz.com/chartrepo/milvus'
sh 'helm repo update'
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/values.yaml --namespace milvus-1 --version 0.5.0 ."
}
}
} catch (exc) {
echo 'Helm running failed!'
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
throw exc
}
try {
sh 'helm init --client-only --skip-refresh --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts'
sh 'helm repo add milvus https://registry.zilliz.com/chartrepo/milvus'
sh 'helm repo update'
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.repository=\"zilliz.azurecr.cn/milvus/engine\" --set engine.image.tag=${DOCKER_VERSION} --set expose.type=loadBalancer --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/values.yaml --namespace milvus-1 --version 0.5.0 ."
}
}
} catch (exc) {
echo 'Helm running failed!'
sh "helm del --purge ${env.JOB_NAME}-${env.BUILD_NUMBER}"
throw exc
}
timeout(time: 30, unit: 'MINUTES') {
try {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements.txt -i http://pypi.douban.com/simple --trusted-host pypi.douban.com'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-1.svc.cluster.local --internal=true"
}
// mysql database backend test
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
if (!fileExists('milvus-helm')) {
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
}
}
dir ("milvus-helm") {
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/db_backend/mysql_values.yaml --namespace milvus-2 --version 0.5.0 ."
}
}
dir ("${PROJECT_NAME}_test") {
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-2.svc.cluster.local --internal=true"
}
} catch (exc) {
echo 'Milvus Test Failed !'
throw exc
}
}
timeout(time: 60, unit: 'MINUTES') {
try {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements.txt -i http://pypi.douban.com/simple --trusted-host pypi.douban.com'
sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-1.svc.cluster.local --internal=true"
}
// mysql database backend test
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
if (!fileExists('milvus-helm')) {
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
}
}
dir ("milvus-helm") {
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/db_backend/mysql_values.yaml --namespace milvus-2 --version 0.4.0 ."
}
}
dir ("${PROJECT_NAME}_test") {
sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine.milvus-2.svc.cluster.local --internal=true"
}
} catch (exc) {
echo 'Milvus Test Failed !'
throw exc
}
}
container('milvus-build-env') {
timeout(time: 120, unit: 'MINUTES') {
gitlabCommitStatus(name: 'Build Engine') {
dir ("milvus_engine") {
try {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'SubmoduleOption',disableSubmodules: false,parentCredentials: true,recursiveSubmodules: true,reference: '',trackingSubmodules: false]], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("core") {
sh "git config --global user.email \"test@zilliz.com\""
sh "git config --global user.name \"test\""
withCredentials([usernamePassword(credentialsId: "${params.JFROG_USER}", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
sh "./build.sh -l"
sh "rm -rf cmake_build"
sh "export JFROG_ARTFACTORY_URL='${params.JFROG_ARTFACTORY_URL}' \
&& export JFROG_USER_NAME='${USERNAME}' \
&& export JFROG_PASSWORD='${PASSWORD}' \
&& export FAISS_URL='http://192.168.1.105:6060/jinhai/faiss/-/archive/branch-0.3.0/faiss-branch-0.3.0.tar.gz' \
&& ./build.sh -t ${params.BUILD_TYPE} -d /opt/milvus -j -u -c"
sh "./coverage.sh -u root -p 123456 -t \$POD_IP"
}
}
} catch (exc) {
updateGitlabCommitStatus name: 'Build Engine', state: 'failed'
throw exc
}
}
}
}
}
container('milvus-build-env') {
timeout(time: 120, unit: 'MINUTES') {
gitlabCommitStatus(name: 'Build Engine') {
dir ("milvus_engine") {
try {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'SubmoduleOption',disableSubmodules: false,parentCredentials: true,recursiveSubmodules: true,reference: '',trackingSubmodules: false]], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("core") {
sh "git config --global user.email \"test@zilliz.com\""
sh "git config --global user.name \"test\""
withCredentials([usernamePassword(credentialsId: "${params.JFROG_USER}", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
sh "./build.sh -l"
sh "rm -rf cmake_build"
sh "export JFROG_ARTFACTORY_URL='${params.JFROG_ARTFACTORY_URL}' \
&& export JFROG_USER_NAME='${USERNAME}' \
&& export JFROG_PASSWORD='${PASSWORD}' \
&& export FAISS_URL='http://192.168.1.105:6060/jinhai/faiss/-/archive/branch-0.3.0/faiss-branch-0.3.0.tar.gz' \
&& ./build.sh -t ${params.BUILD_TYPE} -j -d /opt/milvus"
}
}
} catch (exc) {
updateGitlabCommitStatus name: 'Build Engine', state: 'failed'
throw exc
}
}
}
}
}
container('publish-docker') {
timeout(time: 15, unit: 'MINUTES') {
gitlabCommitStatus(name: 'Publish Engine Docker') {
try {
dir ("${PROJECT_NAME}_build") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:build/milvus_build.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("docker/deploy/ubuntu16.04/free_version") {
sh "curl -O -u anonymous: ftp://192.168.1.126/data/${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}/${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz"
sh "tar zxvf ${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz"
try {
def customImage = docker.build("${PROJECT_NAME}/engine:${DOCKER_VERSION}")
docker.withRegistry('https://registry.zilliz.com', "${params.DOCKER_PUBLISH_USER}") {
customImage.push()
}
docker.withRegistry('https://zilliz.azurecr.cn', "${params.AZURE_DOCKER_PUBLISH_USER}") {
customImage.push()
}
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'success'
echo "Docker Pull Command: docker pull registry.zilliz.com/${PROJECT_NAME}/engine:${DOCKER_VERSION}"
}
} catch (exc) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'canceled'
throw exc
} finally {
sh "docker rmi ${PROJECT_NAME}/engine:${DOCKER_VERSION}"
}
}
}
} catch (exc) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'failed'
echo 'Publish docker failed!'
throw exc
}
}
}
}
def notify() {
if (!currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
// Send an email when the build result is worse than SUCCESS
emailext subject: '$DEFAULT_SUBJECT',
body: '$DEFAULT_CONTENT',
recipientProviders: [
[$class: 'DevelopersRecipientProvider'],
[$class: 'RequesterRecipientProvider']
],
replyTo: '$DEFAULT_REPLYTO',
to: '$DEFAULT_RECIPIENTS'
}
}
return this
container('milvus-build-env') {
timeout(time: 5, unit: 'MINUTES') {
dir ("milvus_engine") {
dir ("core") {
gitlabCommitStatus(name: 'Packaged Engine') {
if (fileExists('milvus')) {
try {
sh "tar -zcvf ./${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz ./milvus"
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz", "${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Download Milvus Engine Binary Viewer \"http://192.168.1.126:8080/${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}/${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz\""
}
} catch (exc) {
updateGitlabCommitStatus name: 'Packaged Engine', state: 'failed'
throw exc
}
} else {
updateGitlabCommitStatus name: 'Packaged Engine', state: 'failed'
error("Milvus binary directory don't exists!")
}
}
gitlabCommitStatus(name: 'Packaged Engine lcov') {
if (fileExists('lcov_out')) {
try {
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("lcov_out/", "${PROJECT_NAME}/lcov/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Milvus lcov out Viewer \"http://192.168.1.126:8080/${PROJECT_NAME}/lcov/${JOB_NAME}-${BUILD_ID}/lcov_out/\""
}
} catch (exc) {
updateGitlabCommitStatus name: 'Packaged Engine lcov', state: 'failed'
throw exc
}
} else {
updateGitlabCommitStatus name: 'Packaged Engine lcov', state: 'failed'
error("Milvus lcov out directory don't exists!")
}
}
}
}
}
}
container('milvus-build-env') {
timeout(time: 5, unit: 'MINUTES') {
dir ("milvus_engine") {
dir ("core") {
gitlabCommitStatus(name: 'Packaged Engine') {
if (fileExists('milvus')) {
try {
sh "tar -zcvf ./${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz ./milvus"
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz", "${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Download Milvus Engine Binary Viewer \"http://192.168.1.126:8080/${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}/${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz\""
}
} catch (exc) {
updateGitlabCommitStatus name: 'Packaged Engine', state: 'failed'
throw exc
}
} else {
updateGitlabCommitStatus name: 'Packaged Engine', state: 'failed'
error("Milvus binary directory don't exists!")
}
}
}
}
}
}
container('publish-docker') {
timeout(time: 15, unit: 'MINUTES') {
gitlabCommitStatus(name: 'Publish Engine Docker') {
try {
dir ("${PROJECT_NAME}_build") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:build/milvus_build.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
dir ("docker/deploy/ubuntu16.04/free_version") {
sh "curl -O -u anonymous: ftp://192.168.1.126/data/${PROJECT_NAME}/engine/${JOB_NAME}-${BUILD_ID}/${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz"
sh "tar zxvf ${PROJECT_NAME}-engine-${PACKAGE_VERSION}.tar.gz"
try {
def customImage = docker.build("${PROJECT_NAME}/engine:${DOCKER_VERSION}")
docker.withRegistry('https://registry.zilliz.com', "${params.DOCKER_PUBLISH_USER}") {
customImage.push()
}
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'success'
echo "Docker Pull Command: docker pull registry.zilliz.com/${PROJECT_NAME}/engine:${DOCKER_VERSION}"
}
} catch (exc) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'canceled'
throw exc
} finally {
sh "docker rmi ${PROJECT_NAME}/engine:${DOCKER_VERSION}"
}
}
}
} catch (exc) {
updateGitlabCommitStatus name: 'Publish Engine Docker', state: 'failed'
echo 'Publish docker failed!'
throw exc
}
}
}
}
timeout(time: 40, unit: 'MINUTES') {
try {
dir ("${PROJECT_NAME}_test") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:Test/milvus_test.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
sh 'python3 -m pip install -r requirements.txt'
def service_ip = sh (script: "kubectl get svc --namespace milvus-1 ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine --template \"{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}\"",returnStdout: true).trim()
sh "pytest . --alluredir=\"test_out/staging/single/sqlite\" --ip ${service_ip}"
}
// mysql database backend test
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_staging.groovy"
if (!fileExists('milvus-helm')) {
dir ("milvus-helm") {
checkout([$class: 'GitSCM', branches: [[name: "${SEMVER}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: "${params.GIT_USER}", url: "git@192.168.1.105:megasearch/milvus-helm.git", name: 'origin', refspec: "+refs/heads/${SEMVER}:refs/remotes/origin/${SEMVER}"]]])
}
}
dir ("milvus-helm") {
dir ("milvus/milvus-gpu") {
sh "helm install --wait --timeout 300 --set engine.image.repository=\"zilliz.azurecr.cn/milvus/engine\" --set engine.image.tag=${DOCKER_VERSION} --set expose.type=loadBalancer --name ${env.JOB_NAME}-${env.BUILD_NUMBER} -f ci/db_backend/mysql_values.yaml --namespace milvus-2 --version 0.5.0 ."
}
}
dir ("${PROJECT_NAME}_test") {
def service_ip = sh (script: "kubectl get svc --namespace milvus-2 ${env.JOB_NAME}-${env.BUILD_NUMBER}-milvus-gpu-engine --template \"{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}\"",returnStdout: true).trim()
sh "pytest . --alluredir=\"test_out/staging/single/mysql\" --ip ${service_ip}"
}
} catch (exc) {
echo 'Milvus Test Failed !'
throw exc
}
}
timeout(time: 5, unit: 'MINUTES') {
dir ("${PROJECT_NAME}_test") {
if (fileExists('cluster_test_out')) {
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("cluster_test_out/", "${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Milvus Dev Test Out Viewer \"ftp://192.168.1.126/data/${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}\""
}
} else {
error("Milvus Dev Test Out directory don't exists!")
}
}
}
timeout(time: 5, unit: 'MINUTES') {
dir ("${PROJECT_NAME}_test") {
if (fileExists('test_out/dev')) {
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("test_out/dev/", "${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Milvus Dev Test Out Viewer \"ftp://192.168.1.126/data/${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}\""
}
} else {
error("Milvus Dev Test Out directory don't exists!")
}
}
}
timeout(time: 5, unit: 'MINUTES') {
dir ("${PROJECT_NAME}_test") {
if (fileExists('test_out/staging')) {
def fileTransfer = load "${env.WORKSPACE}/ci/function/file_transfer.groovy"
fileTransfer.FileTransfer("test_out/staging/", "${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}", 'nas storage')
if (currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
echo "Milvus Dev Test Out Viewer \"ftp://192.168.1.126/data/${PROJECT_NAME}/test/${JOB_NAME}-${BUILD_ID}\""
}
} else {
error("Milvus Dev Test Out directory don't exists!")
}
}
}
pipeline {
agent none
options {
timestamps()
}
environment {
PROJECT_NAME = "milvus"
LOWER_BUILD_TYPE = BUILD_TYPE.toLowerCase()
SEMVER = "${env.gitlabSourceBranch == null ? params.ENGINE_BRANCH.substring(params.ENGINE_BRANCH.lastIndexOf('/') + 1) : env.gitlabSourceBranch}"
GITLAB_AFTER_COMMIT = "${env.gitlabAfter == null ? null : env.gitlabAfter}"
SUFFIX_VERSION_NAME = "${env.gitlabAfter == null ? null : env.gitlabAfter.substring(0, 6)}"
DOCKER_VERSION_STR = "${env.gitlabAfter == null ? "${SEMVER}-${LOWER_BUILD_TYPE}" : "${SEMVER}-${LOWER_BUILD_TYPE}-${SUFFIX_VERSION_NAME}"}"
}
stages {
stage("Ubuntu 16.04") {
environment {
PACKAGE_VERSION = VersionNumber([
versionNumberString : '${SEMVER}-${LOWER_BUILD_TYPE}-${BUILD_DATE_FORMATTED, "yyyyMMdd"}'
]);
DOCKER_VERSION = VersionNumber([
versionNumberString : '${DOCKER_VERSION_STR}'
]);
}
stages {
stage("Run Build") {
agent {
kubernetes {
cloud 'build-kubernetes'
label 'build'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
name: milvus-build-env
labels:
app: milvus
componet: build-env
spec:
containers:
- name: milvus-build-env
image: registry.zilliz.com/milvus/milvus-build-env:v0.13
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
command:
- cat
tty: true
resources:
limits:
memory: "28Gi"
cpu: "10.0"
nvidia.com/gpu: 1
requests:
memory: "14Gi"
cpu: "5.0"
- name: milvus-mysql
image: mysql:5.6
env:
- name: MYSQL_ROOT_PASSWORD
value: 123456
ports:
- containerPort: 3306
name: mysql
"""
}
}
stages {
stage('Build') {
steps {
gitlabCommitStatus(name: 'Build') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/milvus_build.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/packaged_milvus.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Build', state: 'canceled'
echo "Milvus Build aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Build', state: 'failed'
echo "Milvus Build failure !"
}
}
}
}
stage("Publish docker and helm") {
agent {
kubernetes {
label 'publish'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: publish
componet: docker
spec:
containers:
- name: publish-docker
image: registry.zilliz.com/library/zilliz_docker:v1.0.0
securityContext:
privileged: true
command:
- cat
tty: true
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Publish Docker') {
steps {
gitlabCommitStatus(name: 'Publish Docker') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/publish_docker.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'canceled'
echo "Milvus Publish Docker aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'failed'
echo "Milvus Publish Docker failure !"
}
}
}
}
stage("Deploy to Development") {
parallel {
stage("Single Node") {
agent {
kubernetes {
label 'dev-test'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
componet: test
spec:
containers:
- name: milvus-testframework
image: registry.zilliz.com/milvus/milvus-test:v0.2
command:
- cat
tty: true
volumeMounts:
- name: kubeconf
mountPath: /root/.kube/
readOnly: true
volumes:
- name: kubeconf
secret:
secretName: test-cluster-config
"""
}
}
stages {
stage("Deploy to Dev") {
steps {
gitlabCommitStatus(name: 'Deloy to Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/deploy2dev.groovy"
}
}
}
}
}
stage("Dev Test") {
steps {
gitlabCommitStatus(name: 'Deloy Test') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/dev_test.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_test_out.groovy"
}
}
}
}
}
stage ("Cleanup Dev") {
steps {
gitlabCommitStatus(name: 'Cleanup Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
}
}
}
post {
always {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
success {
script {
echo "Milvus Single Node CI/CD success !"
}
}
aborted {
script {
echo "Milvus Single Node CI/CD aborted !"
}
}
failure {
script {
echo "Milvus Single Node CI/CD failure !"
}
}
}
}
// stage("Cluster") {
// agent {
// kubernetes {
// label 'dev-test'
// defaultContainer 'jnlp'
// yaml """
// apiVersion: v1
// kind: Pod
// metadata:
// labels:
// app: milvus
// componet: test
// spec:
// containers:
// - name: milvus-testframework
// image: registry.zilliz.com/milvus/milvus-test:v0.2
// command:
// - cat
// tty: true
// volumeMounts:
// - name: kubeconf
// mountPath: /root/.kube/
// readOnly: true
// volumes:
// - name: kubeconf
// secret:
// secretName: test-cluster-config
// """
// }
// }
// stages {
// stage("Deploy to Dev") {
// steps {
// gitlabCommitStatus(name: 'Deloy to Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_deploy2dev.groovy"
// }
// }
// }
// }
// }
// stage("Dev Test") {
// steps {
// gitlabCommitStatus(name: 'Deloy Test') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_dev_test.groovy"
// load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_cluster_test_out.groovy"
// }
// }
// }
// }
// }
// stage ("Cleanup Dev") {
// steps {
// gitlabCommitStatus(name: 'Cleanup Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// }
// }
// }
// post {
// always {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// success {
// script {
// echo "Milvus Cluster CI/CD success !"
// }
// }
// aborted {
// script {
// echo "Milvus Cluster CI/CD aborted !"
// }
// }
// failure {
// script {
// echo "Milvus Cluster CI/CD failure !"
// }
// }
// }
// }
}
}
}
}
}
post {
always {
script {
if (env.gitlabAfter != null) {
if (!currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
// Send an email when the build result is worse than SUCCESS
emailext subject: '$DEFAULT_SUBJECT',
body: '$DEFAULT_CONTENT',
recipientProviders: [
[$class: 'DevelopersRecipientProvider'],
[$class: 'RequesterRecipientProvider']
],
replyTo: '$DEFAULT_REPLYTO',
to: '$DEFAULT_RECIPIENTS'
}
}
}
}
success {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'success'
echo "Milvus CI/CD success !"
}
}
aborted {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'canceled'
echo "Milvus CI/CD aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'failed'
echo "Milvus CI/CD failure !"
}
}
}
}
pipeline {
agent none
options {
timestamps()
}
environment {
PROJECT_NAME = "milvus"
LOWER_BUILD_TYPE = BUILD_TYPE.toLowerCase()
SEMVER = "${env.gitlabSourceBranch == null ? params.ENGINE_BRANCH.substring(params.ENGINE_BRANCH.lastIndexOf('/') + 1) : env.gitlabSourceBranch}"
GITLAB_AFTER_COMMIT = "${env.gitlabAfter == null ? null : env.gitlabAfter}"
SUFFIX_VERSION_NAME = "${env.gitlabAfter == null ? null : env.gitlabAfter.substring(0, 6)}"
DOCKER_VERSION_STR = "${env.gitlabAfter == null ? "${SEMVER}-${LOWER_BUILD_TYPE}" : "${SEMVER}-${LOWER_BUILD_TYPE}-${SUFFIX_VERSION_NAME}"}"
}
stages {
stage("Ubuntu 16.04") {
environment {
PACKAGE_VERSION = VersionNumber([
versionNumberString : '${SEMVER}-${LOWER_BUILD_TYPE}-${BUILD_DATE_FORMATTED, "yyyyMMdd"}'
]);
DOCKER_VERSION = VersionNumber([
versionNumberString : '${DOCKER_VERSION_STR}'
]);
}
stages {
stage("Run Build") {
agent {
kubernetes {
cloud 'build-kubernetes'
label 'build'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
name: milvus-build-env
labels:
app: milvus
componet: build-env
spec:
containers:
- name: milvus-build-env
image: registry.zilliz.com/milvus/milvus-build-env:v0.13
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
command:
- cat
tty: true
resources:
limits:
memory: "28Gi"
cpu: "10.0"
nvidia.com/gpu: 1
requests:
memory: "14Gi"
cpu: "5.0"
- name: milvus-mysql
image: mysql:5.6
env:
- name: MYSQL_ROOT_PASSWORD
value: 123456
ports:
- containerPort: 3306
name: mysql
"""
}
}
stages {
stage('Build') {
steps {
gitlabCommitStatus(name: 'Build') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/milvus_build_no_ut.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/packaged_milvus_no_ut.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Build', state: 'canceled'
echo "Milvus Build aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Build', state: 'failed'
echo "Milvus Build failure !"
}
}
}
}
stage("Publish docker and helm") {
agent {
kubernetes {
label 'publish'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: publish
componet: docker
spec:
containers:
- name: publish-docker
image: registry.zilliz.com/library/zilliz_docker:v1.0.0
securityContext:
privileged: true
command:
- cat
tty: true
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Publish Docker') {
steps {
gitlabCommitStatus(name: 'Publish Docker') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/publish_docker.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'canceled'
echo "Milvus Publish Docker aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'failed'
echo "Milvus Publish Docker failure !"
}
}
}
}
stage("Deploy to Development") {
parallel {
stage("Single Node") {
agent {
kubernetes {
label 'dev-test'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
componet: test
spec:
containers:
- name: milvus-testframework
image: registry.zilliz.com/milvus/milvus-test:v0.2
command:
- cat
tty: true
volumeMounts:
- name: kubeconf
mountPath: /root/.kube/
readOnly: true
volumes:
- name: kubeconf
secret:
secretName: test-cluster-config
"""
}
}
stages {
stage("Deploy to Dev") {
steps {
gitlabCommitStatus(name: 'Deploy to Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/deploy2dev.groovy"
}
}
}
}
}
stage("Dev Test") {
steps {
gitlabCommitStatus(name: 'Deploy Test') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/dev_test.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_test_out.groovy"
}
}
}
}
}
stage ("Cleanup Dev") {
steps {
gitlabCommitStatus(name: 'Cleanup Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
}
}
}
post {
always {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
success {
script {
echo "Milvus Single Node CI/CD success !"
}
}
aborted {
script {
echo "Milvus Single Node CI/CD aborted !"
}
}
failure {
script {
echo "Milvus Single Node CI/CD failure !"
}
}
}
}
// stage("Cluster") {
// agent {
// kubernetes {
// label 'dev-test'
// defaultContainer 'jnlp'
// yaml """
// apiVersion: v1
// kind: Pod
// metadata:
// labels:
// app: milvus
// component: test
// spec:
// containers:
// - name: milvus-testframework
// image: registry.zilliz.com/milvus/milvus-test:v0.2
// command:
// - cat
// tty: true
// volumeMounts:
// - name: kubeconf
// mountPath: /root/.kube/
// readOnly: true
// volumes:
// - name: kubeconf
// secret:
// secretName: test-cluster-config
// """
// }
// }
// stages {
// stage("Deploy to Dev") {
// steps {
// gitlabCommitStatus(name: 'Deploy to Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_deploy2dev.groovy"
// }
// }
// }
// }
// }
// stage("Dev Test") {
// steps {
// gitlabCommitStatus(name: 'Deploy Test') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_dev_test.groovy"
// load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_cluster_test_out.groovy"
// }
// }
// }
// }
// }
// stage ("Cleanup Dev") {
// steps {
// gitlabCommitStatus(name: 'Cleanup Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// }
// }
// }
// post {
// always {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// success {
// script {
// echo "Milvus Cluster CI/CD success !"
// }
// }
// aborted {
// script {
// echo "Milvus Cluster CI/CD aborted !"
// }
// }
// failure {
// script {
// echo "Milvus Cluster CI/CD failure !"
// }
// }
// }
// }
}
}
}
}
}
post {
always {
script {
if (env.gitlabAfter != null) {
if (!currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
// Send an email whenever the build result is worse than SUCCESS
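// '$DEFAULT_SUBJECT' and the other single-quoted tokens below are intentionally
// not Groovy-interpolated; the email-ext plugin expands them when the mail is sent.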
emailext subject: '$DEFAULT_SUBJECT',
body: '$DEFAULT_CONTENT',
recipientProviders: [
[$class: 'DevelopersRecipientProvider'],
[$class: 'RequesterRecipientProvider']
],
replyTo: '$DEFAULT_REPLYTO',
to: '$DEFAULT_RECIPIENTS'
}
}
}
}
success {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'success'
echo "Milvus CI/CD success !"
}
}
aborted {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'canceled'
echo "Milvus CI/CD aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'failed'
echo "Milvus CI/CD failure !"
}
}
}
}
pipeline {
agent none
options {
timestamps()
}
environment {
PROJECT_NAME = "milvus"
LOWER_BUILD_TYPE = BUILD_TYPE.toLowerCase()
SEMVER = "${env.gitlabSourceBranch == null ? params.ENGINE_BRANCH.substring(params.ENGINE_BRANCH.lastIndexOf('/') + 1) : env.gitlabSourceBranch}"
GITLAB_AFTER_COMMIT = "${env.gitlabAfter == null ? null : env.gitlabAfter}"
SUFFIX_VERSION_NAME = "${env.gitlabAfter == null ? null : env.gitlabAfter.substring(0, 6)}"
DOCKER_VERSION_STR = "${env.gitlabAfter == null ? '${SEMVER}-${LOWER_BUILD_TYPE}-${BUILD_DATE_FORMATTED, \"yyyyMMdd\"}' : '${SEMVER}-${LOWER_BUILD_TYPE}-${SUFFIX_VERSION_NAME}'}"
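// Unlike the pipeline above, both branches of this ternary are single-quoted,
// so the ${...} tokens stay literal and are only expanded by the Version Number
// plugin when DOCKER_VERSION is computed below (a reading of this config,
// assuming the Jenkins Version Number plugin's token syntax).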
}
stages {
stage("Ubuntu 16.04") {
environment {
PACKAGE_VERSION = VersionNumber([
versionNumberString : '${SEMVER}-${LOWER_BUILD_TYPE}-${BUILD_DATE_FORMATTED, "yyyyMMdd"}'
]);
DOCKER_VERSION = VersionNumber([
versionNumberString : '${DOCKER_VERSION_STR}'
]);
}
stages {
stage("Run Build") {
agent {
kubernetes {
cloud 'build-kubernetes'
label 'build'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
name: milvus-build-env
labels:
app: milvus
component: build-env
spec:
containers:
- name: milvus-build-env
image: registry.zilliz.com/milvus/milvus-build-env:v0.13
command:
- cat
tty: true
resources:
limits:
memory: "28Gi"
cpu: "10.0"
nvidia.com/gpu: 1
requests:
memory: "14Gi"
cpu: "5.0"
"""
}
}
stages {
stage('Build') {
steps {
gitlabCommitStatus(name: 'Build') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/milvus_build.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/packaged_milvus.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Build', state: 'canceled'
echo "Milvus Build aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Build', state: 'failed'
echo "Milvus Build failure !"
}
}
}
}
stage("Publish docker and helm") {
agent {
kubernetes {
label 'publish'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: publish
component: docker
spec:
containers:
- name: publish-docker
image: registry.zilliz.com/library/zilliz_docker:v1.0.0
securityContext:
privileged: true
command:
- cat
tty: true
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Publish Docker') {
steps {
gitlabCommitStatus(name: 'Publish Docker') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/nightly_publish_docker.groovy"
}
}
}
}
}
post {
aborted {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'canceled'
echo "Milvus Publish Docker aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'Publish Docker', state: 'failed'
echo "Milvus Publish Docker failure !"
}
}
}
}
stage("Deploy to Development") {
parallel {
stage("Single Node") {
agent {
kubernetes {
label 'dev-test'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
component: test
spec:
containers:
- name: milvus-testframework
image: registry.zilliz.com/milvus/milvus-test:v0.2
command:
- cat
tty: true
volumeMounts:
- name: kubeconf
mountPath: /root/.kube/
readOnly: true
volumes:
- name: kubeconf
secret:
secretName: test-cluster-config
"""
}
}
stages {
stage("Deploy to Dev") {
steps {
gitlabCommitStatus(name: 'Deploy to Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/deploy2dev.groovy"
}
}
}
}
}
stage("Dev Test") {
steps {
gitlabCommitStatus(name: 'Deploy Test') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/dev_test_all.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_test_out.groovy"
}
}
}
}
}
stage ("Cleanup Dev") {
steps {
gitlabCommitStatus(name: 'Cleanup Dev') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
}
}
}
post {
always {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_dev.groovy"
}
}
}
success {
script {
echo "Milvus Deploy to Dev Single Node CI/CD success !"
}
}
aborted {
script {
echo "Milvus Deploy to Dev Single Node CI/CD aborted !"
}
}
failure {
script {
echo "Milvus Deploy to Dev Single Node CI/CD failure !"
}
}
}
}
// stage("Cluster") {
// agent {
// kubernetes {
// label 'dev-test'
// defaultContainer 'jnlp'
// yaml """
// apiVersion: v1
// kind: Pod
// metadata:
// labels:
// app: milvus
// component: test
// spec:
// containers:
// - name: milvus-testframework
// image: registry.zilliz.com/milvus/milvus-test:v0.2
// command:
// - cat
// tty: true
// volumeMounts:
// - name: kubeconf
// mountPath: /root/.kube/
// readOnly: true
// volumes:
// - name: kubeconf
// secret:
// secretName: test-cluster-config
// """
// }
// }
// stages {
// stage("Deploy to Dev") {
// steps {
// gitlabCommitStatus(name: 'Deploy to Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_deploy2dev.groovy"
// }
// }
// }
// }
// }
// stage("Dev Test") {
// steps {
// gitlabCommitStatus(name: 'Deploy Test') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_dev_test.groovy"
// load "${env.WORKSPACE}/ci/jenkinsfile/upload_dev_cluster_test_out.groovy"
// }
// }
// }
// }
// }
// stage ("Cleanup Dev") {
// steps {
// gitlabCommitStatus(name: 'Cleanup Dev') {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// }
// }
// }
// post {
// always {
// container('milvus-testframework') {
// script {
// load "${env.WORKSPACE}/ci/jenkinsfile/cluster_cleanup_dev.groovy"
// }
// }
// }
// success {
// script {
// echo "Milvus Deploy to Dev Cluster CI/CD success !"
// }
// }
// aborted {
// script {
// echo "Milvus Deploy to Dev Cluster CI/CD aborted !"
// }
// }
// failure {
// script {
// echo "Milvus Deploy to Dev Cluster CI/CD failure !"
// }
// }
// }
// }
}
}
stage("Deploy to Staging") {
parallel {
stage("Single Node") {
agent {
kubernetes {
label 'dev-test'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
component: test
spec:
containers:
- name: milvus-testframework
image: registry.zilliz.com/milvus/milvus-test:v0.2
command:
- cat
tty: true
volumeMounts:
- name: kubeconf
mountPath: /root/.kube/
readOnly: true
volumes:
- name: kubeconf
secret:
secretName: aks-gpu-cluster-config
"""
}
}
stages {
stage("Deploy to Staging") {
steps {
gitlabCommitStatus(name: 'Deploy to Staging') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/deploy2staging.groovy"
}
}
}
}
}
stage("Staging Test") {
steps {
gitlabCommitStatus(name: 'Staging Test') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/staging_test.groovy"
load "${env.WORKSPACE}/ci/jenkinsfile/upload_staging_test_out.groovy"
}
}
}
}
}
stage ("Cleanup Staging") {
steps {
gitlabCommitStatus(name: 'Cleanup Staging') {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_staging.groovy"
}
}
}
}
}
}
post {
always {
container('milvus-testframework') {
script {
load "${env.WORKSPACE}/ci/jenkinsfile/cleanup_staging.groovy"
}
}
}
success {
script {
echo "Milvus Deploy to Staging Single Node CI/CD success !"
}
}
aborted {
script {
echo "Milvus Deploy to Staging Single Node CI/CD aborted !"
}
}
failure {
script {
echo "Milvus Deploy to Staging Single Node CI/CD failure !"
}
}
}
}
}
}
}
}
}
post {
always {
script {
if (!currentBuild.resultIsBetterOrEqualTo('SUCCESS')) {
// Send an email whenever the build result is worse than SUCCESS
emailext subject: '$DEFAULT_SUBJECT',
body: '$DEFAULT_CONTENT',
recipientProviders: [
[$class: 'DevelopersRecipientProvider'],
[$class: 'RequesterRecipientProvider']
],
replyTo: '$DEFAULT_REPLYTO',
to: '$DEFAULT_RECIPIENTS'
}
}
}
success {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'success'
echo "Milvus CI/CD success !"
}
}
aborted {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'canceled'
echo "Milvus CI/CD aborted !"
}
}
failure {
script {
updateGitlabCommitStatus name: 'CI/CD', state: 'failed'
echo "Milvus CI/CD failure !"
}
}
}
}
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
component: build-env
spec:
containers:
- name: milvus-build-env
image: registry.zilliz.com/milvus/milvus-build-env:v0.9
command:
- cat
tty: true
apiVersion: v1
kind: Pod
metadata:
labels:
app: milvus
component: testframework
spec:
containers:
- name: milvus-testframework
image: registry.zilliz.com/milvus/milvus-test:v0.1
command:
- cat
tty: true
apiVersion: v1
kind: Pod
metadata:
labels:
app: publish
component: docker
spec:
containers:
- name: publish-docker
image: registry.zilliz.com/library/zilliz_docker:v1.0.0
securityContext:
privileged: true
command:
- cat
tty: true
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
......@@ -74,7 +74,7 @@ function(ExternalProject_Use_Cache project_name package_file install_path)
${CMAKE_COMMAND} -E echo
"Extracting ${package_file} to ${install_path}"
COMMAND
${CMAKE_COMMAND} -E tar xzvf ${package_file} ${install_path}
${CMAKE_COMMAND} -E tar xzf ${package_file} ${install_path}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
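# Dropping "v" from the tar flags above silences the per-file listing when the
# cached package is extracted; extraction behavior is otherwise unchanged.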
......
......@@ -33,7 +33,7 @@ FaissBaseIndex::SerializeImpl() {
try {
faiss::Index* index = index_.get();
SealImpl();
// SealImpl();
MemoryIOWriter writer;
faiss::write_index(index, &writer);
......@@ -60,6 +60,8 @@ FaissBaseIndex::LoadImpl(const BinarySet& index_binary) {
faiss::Index* index = faiss::read_index(&reader);
index_.reset(index);
SealImpl();
}
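// Net effect of the two hunks above: SealImpl() is no longer invoked while
// serializing and instead runs once an index has been read back in.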
void
......
......@@ -86,9 +86,6 @@ GPUIVF::SerializeImpl() {
faiss::Index* index = index_.get();
faiss::Index* host_index = faiss::gpu::index_gpu_to_cpu(index);
// TODO(linxj): support seal
// SealImpl();
faiss::write_index(host_index, &writer);
delete host_index;
}
......
......@@ -97,7 +97,6 @@ IVF::Serialize() {
}
std::lock_guard<std::mutex> lk(mutex_);
Seal();
return SerializeImpl();
}
......
......@@ -93,6 +93,15 @@ ClientTest::Test(const std::string& address, const std::string& port) {
std::cout << "CreatePartition function call status: " << stat.message() << std::endl;
milvus_sdk::Utils::PrintPartitionParam(partition_param);
}
// show partitions
milvus::PartitionList partition_array;
stat = conn->ShowPartitions(TABLE_NAME, partition_array);
std::cout << partition_array.size() << " partitions created:" << std::endl;
for (auto& partition : partition_array) {
std::cout << "\t" << partition.partition_name << "\t tag = " << partition.partition_tag << std::endl;
}
}
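// Illustrative console output of the loop above (partition names and tags are
// whatever CreatePartition was called with, so exact values will differ):
//   2 partitions created:
//       partition_1     tag = partition_tag_1
//       partition_2     tag = partition_tag_2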
{ // insert vectors
......
......@@ -4,6 +4,6 @@ log_format = [%(asctime)s-%(levelname)s-%(name)s]: %(message)s (%(filename)s:%(l
log_cli = true
log_level = 20
timeout = 300
timeout = 600
level = 1
\ No newline at end of file
......@@ -22,4 +22,4 @@ wcwidth==0.1.7
wrapt==1.11.1
zipp==0.5.1
scikit-learn>=0.19.1
pymilvus-test>=0.2.0
\ No newline at end of file
pymilvus-test>=0.2.0
......@@ -17,7 +17,6 @@ allure-pytest==2.7.0
pytest-print==0.1.2
pytest-level==0.1.1
six==1.12.0
thrift==0.11.0
typed-ast==1.3.5
wcwidth==0.1.7
wrapt==1.11.1
......
......@@ -15,7 +15,7 @@ table_id = "test_add"
ADD_TIMEOUT = 60
nprobe = 1
epsilon = 0.0001
tag = "1970-01-01"
class TestAddBase:
"""
......@@ -186,6 +186,7 @@ class TestAddBase:
expected: status ok
'''
index_param = get_simple_index_params
logging.getLogger().info(index_param)
vector = gen_single_vector(dim)
status, ids = connect.add_vectors(table, vector)
status = connect.create_index(table, index_param)
......@@ -439,6 +440,80 @@ class TestAddBase:
assert status.OK()
assert len(ids) == nq
@pytest.mark.timeout(ADD_TIMEOUT)
def test_add_vectors_tag(self, connect, table):
'''
target: test add vectors in table created before
method: create table and add vectors in it, with the partition_tag param
expected: the table row count equals nq
'''
nq = 5
partition_name = gen_unique_str()
vectors = gen_vectors(nq, dim)
status = connect.create_partition(table, partition_name, tag)
status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
assert status.OK()
assert len(ids) == nq
@pytest.mark.timeout(ADD_TIMEOUT)
def test_add_vectors_tag_A(self, connect, table):
'''
target: test add vectors in table created before
method: create partition and add vectors in it
expected: the table row count equals nq
'''
nq = 5
partition_name = gen_unique_str()
vectors = gen_vectors(nq, dim)
status = connect.create_partition(table, partition_name, tag)
status, ids = connect.add_vectors(partition_name, vectors)
assert status.OK()
assert len(ids) == nq
@pytest.mark.timeout(ADD_TIMEOUT)
def test_add_vectors_tag_not_existed(self, connect, table):
'''
target: test add vectors in table created before
method: create table and add vectors in it, with a nonexistent partition_tag param
expected: status not ok
'''
nq = 5
vectors = gen_vectors(nq, dim)
status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
assert not status.OK()
@pytest.mark.timeout(ADD_TIMEOUT)
def test_add_vectors_tag_not_existed_A(self, connect, table):
'''
target: test add vectors in table created before
method: create partition, add vectors with a nonexistent partition_tag param
expected: status not ok
'''
nq = 5
vectors = gen_vectors(nq, dim)
new_tag = "new_tag"
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status, ids = connect.add_vectors(table, vectors, partition_tag=new_tag)
assert not status.OK()
@pytest.mark.timeout(ADD_TIMEOUT)
def test_add_vectors_tag_existed(self, connect, table):
'''
target: test add vectors in table created before
method: create table and add vectors in it repeatedly, with the partition_tag param
expected: the table row count equals nq
'''
nq = 5
partition_name = gen_unique_str()
vectors = gen_vectors(nq, dim)
status = connect.create_partition(table, partition_name, tag)
status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
for i in range(5):
status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
assert status.OK()
assert len(ids) == nq
@pytest.mark.level(2)
def test_add_vectors_without_connect(self, dis_connect, table):
'''
......@@ -1198,7 +1273,8 @@ class TestAddAdvance:
assert len(ids) == nb
assert status.OK()
class TestAddTableNameInvalid(object):
class TestNameInvalid(object):
"""
Test adding vectors with invalid table names
"""
......@@ -1209,13 +1285,27 @@ class TestAddTableNameInvalid(object):
def get_table_name(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_invalid_table_names()
)
def get_tag_name(self, request):
yield request.param
@pytest.mark.level(2)
def test_add_vectors_with_invalid_tablename(self, connect, get_table_name):
def test_add_vectors_with_invalid_table_name(self, connect, get_table_name):
table_name = get_table_name
vectors = gen_vectors(1, dim)
status, result = connect.add_vectors(table_name, vectors)
assert not status.OK()
@pytest.mark.level(2)
def test_add_vectors_with_invalid_tag_name(self, connect, table, get_tag_name):
tag_name = get_tag_name
vectors = gen_vectors(1, dim)
status, result = connect.add_vectors(table, vectors, partition_tag=tag_name)
assert not status.OK()
class TestAddTableVectorsInvalid(object):
single_vector = gen_single_vector(dim)
......
......@@ -149,15 +149,14 @@ class TestConnect:
milvus.connect(uri=uri_value, timeout=1)
assert not milvus.connected()
# TODO: enable
def _test_connect_with_multiprocess(self, args):
def test_connect_with_multiprocess(self, args):
'''
target: test uri connect with multiprocess
method: set correct uri, test with multiprocessing connecting
expected: all connection is connected
'''
uri_value = "tcp://%s:%s" % (args["ip"], args["port"])
process_num = 4
process_num = 10
processes = []
def connect(milvus):
......@@ -248,7 +247,7 @@ class TestConnect:
expected: connect raise an exception and connected is false
'''
milvus = Milvus()
uri_value = "tcp://%s:19540" % args["ip"]
uri_value = "tcp://%s:39540" % args["ip"]
with pytest.raises(Exception) as e:
milvus.connect(host=args["ip"], port="", uri=uri_value)
......@@ -264,6 +263,7 @@ class TestConnect:
milvus.connect(host="", port=args["port"], uri=uri_value, timeout=1)
assert not milvus.connected()
# Disabled (issue: https://github.com/milvus-io/milvus/issues/288)
def test_connect_param_priority_both_hostip_uri(self, args):
'''
target: both host_ip_port / uri are both given, and not null, use the uri params
......@@ -273,8 +273,9 @@ class TestConnect:
milvus = Milvus()
uri_value = "tcp://%s:%s" % (args["ip"], args["port"])
with pytest.raises(Exception) as e:
milvus.connect(host=args["ip"], port=19540, uri=uri_value, timeout=1)
assert not milvus.connected()
res = milvus.connect(host=args["ip"], port=39540, uri=uri_value, timeout=1)
logging.getLogger().info(res)
# assert not milvus.connected()
def _test_add_vector_and_disconnect_concurrently(self):
'''
......
......@@ -20,6 +20,7 @@ vectors = sklearn.preprocessing.normalize(vectors, axis=1, norm='l2')
vectors = vectors.tolist()
BUILD_TIMEOUT = 60
nprobe = 1
tag = "1970-01-01"
class TestIndexBase:
......@@ -62,6 +63,21 @@ class TestIndexBase:
status = connect.create_index(table, index_params)
assert status.OK()
@pytest.mark.timeout(BUILD_TIMEOUT)
def test_create_index_partition(self, connect, table, get_index_params):
'''
target: test create index interface
method: create table, create partition, and add vectors in it, create index
expected: return code equals 0, and search success
'''
partition_name = gen_unique_str()
index_params = get_index_params
logging.getLogger().info(index_params)
status = connect.create_partition(table, partition_name, tag)
status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
status = connect.create_index(table, index_params)
assert status.OK()
@pytest.mark.level(2)
def test_create_index_without_connect(self, dis_connect, table):
'''
......@@ -555,6 +571,21 @@ class TestIndexIP:
status = connect.create_index(ip_table, index_params)
assert status.OK()
@pytest.mark.timeout(BUILD_TIMEOUT)
def test_create_index_partition(self, connect, ip_table, get_index_params):
'''
target: test create index interface
method: create table, create partition, and add vectors in it, create index
expected: return code equals 0, and search success
'''
partition_name = gen_unique_str()
index_params = get_index_params
logging.getLogger().info(index_params)
status = connect.create_partition(ip_table, partition_name, tag)
status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
status = connect.create_index(partition_name, index_params)
assert status.OK()
@pytest.mark.level(2)
def test_create_index_without_connect(self, dis_connect, ip_table):
'''
......@@ -583,9 +614,9 @@ class TestIndexIP:
query_vecs = [vectors[0], vectors[1], vectors[2]]
top_k = 5
status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vecs)
logging.getLogger().info(result)
assert status.OK()
assert len(result) == len(query_vecs)
# logging.getLogger().info(result)
# TODO: enable
@pytest.mark.timeout(BUILD_TIMEOUT)
......@@ -743,13 +774,13 @@ class TestIndexIP:
******************************************************************
"""
def test_describe_index(self, connect, ip_table, get_index_params):
def test_describe_index(self, connect, ip_table, get_simple_index_params):
'''
target: test describe index interface
method: create table and add vectors in it, create index, call describe index
expected: return code 0, and index structure returned
'''
index_params = get_index_params
index_params = get_simple_index_params
logging.getLogger().info(index_params)
status, ids = connect.add_vectors(ip_table, vectors)
status = connect.create_index(ip_table, index_params)
......@@ -759,6 +790,80 @@ class TestIndexIP:
assert result._table_name == ip_table
assert result._index_type == index_params["index_type"]
def test_describe_index_partition(self, connect, ip_table, get_simple_index_params):
'''
target: test describe index interface
method: create table, create partition and add vectors in it, create index, call describe index
expected: return code 0, and index structure returned
'''
partition_name = gen_unique_str()
index_params = get_simple_index_params
logging.getLogger().info(index_params)
status = connect.create_partition(ip_table, partition_name, tag)
status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
status = connect.create_index(ip_table, index_params)
status, result = connect.describe_index(ip_table)
logging.getLogger().info(result)
assert result._nlist == index_params["nlist"]
assert result._table_name == ip_table
assert result._index_type == index_params["index_type"]
status, result = connect.describe_index(partition_name)
logging.getLogger().info(result)
assert result._nlist == index_params["nlist"]
assert result._table_name == partition_name
assert result._index_type == index_params["index_type"]
def test_describe_index_partition_A(self, connect, ip_table, get_simple_index_params):
'''
target: test describe index interface
method: create table, create partition and add vectors in it, create index on partition, call describe index
expected: return code 0, and index structure returned
'''
partition_name = gen_unique_str()
index_params = get_simple_index_params
logging.getLogger().info(index_params)
status = connect.create_partition(ip_table, partition_name, tag)
status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
status = connect.create_index(partition_name, index_params)
status, result = connect.describe_index(ip_table)
logging.getLogger().info(result)
assert result._nlist == 16384
assert result._table_name == ip_table
assert result._index_type == IndexType.FLAT
status, result = connect.describe_index(partition_name)
logging.getLogger().info(result)
assert result._nlist == index_params["nlist"]
assert result._table_name == partition_name
assert result._index_type == index_params["index_type"]
def test_describe_index_partition_B(self, connect, ip_table, get_simple_index_params):
'''
target: test describe index interface
method: create table, create partitions and add vectors in it, create index on partitions, call describe index
expected: return code 0, and index structure returned
'''
partition_name = gen_unique_str()
new_partition_name = gen_unique_str()
new_tag = "new_tag"
index_params = get_simple_index_params
logging.getLogger().info(index_params)
status = connect.create_partition(ip_table, partition_name, tag)
status = connect.create_partition(ip_table, new_partition_name, new_tag)
status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
status, ids = connect.add_vectors(ip_table, vectors, partition_tag=new_tag)
status = connect.create_index(partition_name, index_params)
status = connect.create_index(new_partition_name, index_params)
status, result = connect.describe_index(ip_table)
logging.getLogger().info(result)
assert result._nlist == 16384
assert result._table_name == ip_table
assert result._index_type == IndexType.FLAT
status, result = connect.describe_index(new_partition_name)
logging.getLogger().info(result)
assert result._nlist == index_params["nlist"]
assert result._table_name == new_partition_name
assert result._index_type == index_params["index_type"]
def test_describe_and_drop_index_multi_tables(self, connect, get_simple_index_params):
'''
target: test create, describe and drop index interface with multiple tables of IP
......@@ -849,6 +954,111 @@ class TestIndexIP:
assert result._table_name == ip_table
assert result._index_type == IndexType.FLAT
def test_drop_index_partition(self, connect, ip_table, get_simple_index_params):
'''
target: test drop index interface
method: create table, create partition and add vectors in it, create index on table, call drop table index
expected: return code 0, and default index param
'''
partition_name = gen_unique_str()
index_params = get_simple_index_params
status = connect.create_partition(ip_table, partition_name, tag)
status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
status = connect.create_index(ip_table, index_params)
assert status.OK()
status, result = connect.describe_index(ip_table)
logging.getLogger().info(result)
status = connect.drop_index(ip_table)
assert status.OK()
status, result = connect.describe_index(ip_table)
logging.getLogger().info(result)
assert result._nlist == 16384
assert result._table_name == ip_table
assert result._index_type == IndexType.FLAT
def test_drop_index_partition_A(self, connect, ip_table, get_simple_index_params):
'''
target: test drop index interface
method: create table, create partition and add vectors in it, create index on partition, call drop table index
expected: return code 0, and default index param
'''
partition_name = gen_unique_str()
index_params = get_simple_index_params
status = connect.create_partition(ip_table, partition_name, tag)
status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
status = connect.create_index(partition_name, index_params)
assert status.OK()
status = connect.drop_index(ip_table)
assert status.OK()
status, result = connect.describe_index(ip_table)
logging.getLogger().info(result)
assert result._nlist == 16384
assert result._table_name == ip_table
assert result._index_type == IndexType.FLAT
status, result = connect.describe_index(partition_name)
logging.getLogger().info(result)
assert result._nlist == 16384
assert result._table_name == partition_name
assert result._index_type == IndexType.FLAT
def test_drop_index_partition_B(self, connect, ip_table, get_simple_index_params):
'''
target: test drop index interface
method: create table, create partition and add vectors in it, create index on partition, call drop partition index
expected: return code 0, and default index param
'''
partition_name = gen_unique_str()
index_params = get_simple_index_params
status = connect.create_partition(ip_table, partition_name, tag)
status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
status = connect.create_index(partition_name, index_params)
assert status.OK()
status = connect.drop_index(partition_name)
assert status.OK()
status, result = connect.describe_index(ip_table)
logging.getLogger().info(result)
assert result._nlist == 16384
assert result._table_name == ip_table
assert result._index_type == IndexType.FLAT
status, result = connect.describe_index(partition_name)
logging.getLogger().info(result)
assert result._nlist == 16384
assert result._table_name == partition_name
assert result._index_type == IndexType.FLAT
def test_drop_index_partition_C(self, connect, ip_table, get_simple_index_params):
'''
target: test drop index interface
method: create table, create partitions and add vectors in it, create index on partitions, call drop partition index
expected: return code 0, and default index param
'''
partition_name = gen_unique_str()
new_partition_name = gen_unique_str()
new_tag = "new_tag"
index_params = get_simple_index_params
status = connect.create_partition(ip_table, partition_name, tag)
status = connect.create_partition(ip_table, new_partition_name, new_tag)
status, ids = connect.add_vectors(ip_table, vectors)
status = connect.create_index(ip_table, index_params)
assert status.OK()
status = connect.drop_index(new_partition_name)
assert status.OK()
status, result = connect.describe_index(new_partition_name)
logging.getLogger().info(result)
assert result._nlist == 16384
assert result._table_name == new_partition_name
assert result._index_type == IndexType.FLAT
status, result = connect.describe_index(partition_name)
logging.getLogger().info(result)
assert result._nlist == index_params["nlist"]
assert result._table_name == partition_name
assert result._index_type == index_params["index_type"]
status, result = connect.describe_index(ip_table)
logging.getLogger().info(result)
assert result._nlist == index_params["nlist"]
assert result._table_name == ip_table
assert result._index_type == index_params["index_type"]
def test_drop_index_repeatly(self, connect, ip_table, get_simple_index_params):
'''
target: test drop index repeatedly
......
......@@ -25,9 +25,8 @@ index_params = {'index_type': IndexType.IVFLAT, 'nlist': 16384}
class TestMixBase:
# TODO: enable
def test_search_during_createIndex(self, args):
loops = 100000
loops = 10000
table = gen_unique_str()
query_vecs = [vectors[0], vectors[1]]
uri = "tcp://%s:%s" % (args["ip"], args["port"])
......
import time
import random
import pdb
import threading
import logging
from multiprocessing import Pool, Process
import pytest
from milvus import Milvus, IndexType, MetricType
from utils import *
dim = 128
index_file_size = 10
table_id = "test_add"
ADD_TIMEOUT = 60
nprobe = 1
epsilon = 0.0001
tag = "1970-01-01"
class TestCreateBase:
"""
******************************************************************
The following cases are used to test `create_partition` function
******************************************************************
"""
def test_create_partition(self, connect, table):
'''
target: test create partition, check status returned
method: call function: create_partition
expected: status ok
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
def test_create_partition_repeat(self, connect, table):
'''
target: test create partition, check status returned
method: call function: create_partition
expected: status ok
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status = connect.create_partition(table, partition_name, tag)
assert not status.OK()
def test_create_partition_recursively(self, connect, table):
'''
target: test create partition, and create partition in parent partition, check status returned
method: call function: create_partition
expected: status not ok
'''
partition_name = gen_unique_str()
new_partition_name = gen_unique_str()
new_tag = "new_tag"
status = connect.create_partition(table, partition_name, tag)
status = connect.create_partition(partition_name, new_partition_name, new_tag)
assert not status.OK()
def test_create_partition_table_not_existed(self, connect):
'''
target: test create partition, its owner table name not existed in db, check status returned
method: call function: create_partition
expected: status not ok
'''
table_name = gen_unique_str()
partition_name = gen_unique_str()
status = connect.create_partition(table_name, partition_name, tag)
assert not status.OK()
def test_create_partition_partition_name_existed(self, connect, table):
'''
target: test create partition, and create the same partition again, check status returned
method: call function: create_partition
expected: status not ok
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
tag_new = "tag_new"
status = connect.create_partition(table, partition_name, tag_new)
assert not status.OK()
def test_create_partition_partition_name_equals_table(self, connect, table):
'''
target: test create partition with a partition name equal to the table name, check status returned
method: call function: create_partition
expected: status not ok
'''
status = connect.create_partition(table, table, tag)
assert not status.OK()
def test_create_partition_partition_name_None(self, connect, table):
'''
target: test create partition, partition name set None, check status returned
method: call function: create_partition
expected: status not ok
'''
partition_name = None
status = connect.create_partition(table, partition_name, tag)
assert not status.OK()
def test_create_partition_tag_name_None(self, connect, table):
'''
target: test create partition, tag name set None, check status returned
method: call function: create_partition
expected: status ok
'''
tag_name = None
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag_name)
assert not status.OK()
def test_create_different_partition_tag_name_existed(self, connect, table):
'''
target: test create partition, and create the same partition tag again, check status returned
method: call function: create_partition with the same tag name
expected: status not ok
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
new_partition_name = gen_unique_str()
status = connect.create_partition(table, new_partition_name, tag)
assert not status.OK()
def test_create_partition_add_vectors(self, connect, table):
'''
target: test create partition, and insert vectors, check status returned
method: call function: create_partition
expected: status ok
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
nq = 100
vectors = gen_vectors(nq, dim)
ids = [i for i in range(nq)]
status, ids = connect.insert(table, vectors, ids)
assert status.OK()
def test_create_partition_insert_with_tag(self, connect, table):
'''
target: test create partition, and insert vectors, check status returned
method: call function: create_partition
expected: status ok
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
nq = 100
vectors = gen_vectors(nq, dim)
ids = [i for i in range(nq)]
status, ids = connect.insert(table, vectors, ids, partition_tag=tag)
assert status.OK()
def test_create_partition_insert_with_tag_not_existed(self, connect, table):
'''
target: test create partition, and insert vectors, check status returned
method: call function: create_partition
expected: status not ok
'''
tag_new = "tag_new"
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
nq = 100
vectors = gen_vectors(nq, dim)
ids = [i for i in range(nq)]
status, ids = connect.insert(table, vectors, ids, partition_tag=tag_new)
assert not status.OK()
def test_create_partition_insert_same_tags(self, connect, table):
'''
target: test create partition, and insert vectors, check status returned
method: call function: create_partition
expected: status ok
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
nq = 100
vectors = gen_vectors(nq, dim)
ids = [i for i in range(nq)]
status, ids = connect.insert(table, vectors, ids, partition_tag=tag)
ids = [(i+100) for i in range(nq)]
status, ids = connect.insert(table, vectors, ids, partition_tag=tag)
assert status.OK()
time.sleep(1)
status, res = connect.get_table_row_count(partition_name)
assert res == nq * 2
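# Both inserts above targeted the same tag, so counting rows on the partition
# itself (rather than on the parent table) reports nq * 2.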
def test_create_partition_insert_same_tags_two_tables(self, connect, table):
'''
target: test create two partitions, and insert vectors with the same tag to each table, check status returned
method: call function: create_partition
expected: status ok, table length is correct
'''
partition_name = gen_unique_str()
table_new = gen_unique_str()
new_partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
param = {'table_name': table_new,
'dimension': dim,
'index_file_size': index_file_size,
'metric_type': MetricType.L2}
status = connect.create_table(param)
status = connect.create_partition(table_new, new_partition_name, tag)
assert status.OK()
nq = 100
vectors = gen_vectors(nq, dim)
ids = [i for i in range(nq)]
status, ids = connect.insert(table, vectors, ids, partition_tag=tag)
ids = [(i+100) for i in range(nq)]
status, ids = connect.insert(table_new, vectors, ids, partition_tag=tag)
assert status.OK()
time.sleep(1)
status, res = connect.get_table_row_count(new_partition_name)
assert res == nq
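# A minimal sketch of the partition workflow these cases exercise (assumes a
# reachable Milvus server and this same pymilvus client; the table name, tag
# and vectors are illustrative):
#
#     client = Milvus()
#     client.connect(host="127.0.0.1", port="19530")
#     client.create_partition("demo_table", "demo_partition", "2019-12-01")
#     status, ids = client.insert("demo_table", vectors, partition_tag="2019-12-01")
#     status, partitions = client.show_partitions("demo_table")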
class TestShowBase:
"""
******************************************************************
The following cases are used to test `show_partitions` function
******************************************************************
"""
def test_show_partitions(self, connect, table):
'''
target: test show partitions, check status and partitions returned
method: create partition first, then call function: show_partitions
expected: status ok, partition correct
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status, res = connect.show_partitions(table)
assert status.OK()
def test_show_partitions_no_partition(self, connect, table):
'''
target: test show partitions with table name, check status and partitions returned
method: call function: show_partitions
expected: status ok, partitions correct
'''
partition_name = gen_unique_str()
status, res = connect.show_partitions(table)
assert status.OK()
def test_show_partitions_no_partition_recursive(self, connect, table):
'''
target: test show partitions with partition name, check status and partitions returned
method: call function: show_partitions
expected: status ok, no partitions
'''
partition_name = gen_unique_str()
status, res = connect.show_partitions(partition_name)
assert status.OK()
assert len(res) == 0
def test_show_multi_partitions(self, connect, table):
'''
target: test show partitions, check status and partitions returned
method: create partitions first, then call function: show_partitions
expected: status ok, partitions correct
'''
partition_name = gen_unique_str()
new_partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status = connect.create_partition(table, new_partition_name, tag)
status, res = connect.show_partitions(table)
assert status.OK()
class TestDropBase:
"""
******************************************************************
The following cases are used to test `drop_partition` function
******************************************************************
"""
def test_drop_partition(self, connect, table):
'''
target: test drop partition, check status and partition if existed
method: create partitions first, then call function: drop_partition
expected: status ok, no partitions in db
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status = connect.drop_partition(table, tag)
assert status.OK()
# check if the partition existed
status, res = connect.show_partitions(table)
assert partition_name not in res
def test_drop_partition_tag_not_existed(self, connect, table):
'''
target: test drop partition, but the tag does not exist
method: create partitions first, then call function: drop_partition
expected: status not ok
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
new_tag = "new_tag"
status = connect.drop_partition(table, new_tag)
assert not status.OK()
def test_drop_partition_tag_not_existed_A(self, connect, table):
'''
target: test drop partition, but the table does not exist
method: create partitions first, then call function: drop_partition
expected: status not ok
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
new_table = gen_unique_str()
status = connect.drop_partition(new_table, tag)
assert not status.OK()
def test_drop_partition_repeatedly(self, connect, table):
'''
target: test drop partition twice, check status and partition if existed
method: create partitions first, then call function: drop_partition
expected: status not ok, no partitions in db
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status = connect.drop_partition(table, tag)
status = connect.drop_partition(table, tag)
time.sleep(2)
assert not status.OK()
status, res = connect.show_partitions(table)
assert partition_name not in res
def test_drop_partition_create(self, connect, table):
'''
target: test drop partition, and create again, check status
method: create partitions first, then call function: drop_partition, create_partition
expected: status ok, partition in db
'''
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status = connect.drop_partition(table, tag)
time.sleep(2)
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
status, res = connect.show_partitions(table)
assert partition_name == res[0].partition_name
class TestNameInvalid(object):
@pytest.fixture(
scope="function",
params=gen_invalid_table_names()
)
def get_partition_name(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_invalid_table_names()
)
def get_tag_name(self, request):
yield request.param
@pytest.fixture(
scope="function",
params=gen_invalid_table_names()
)
def get_table_name(self, request):
yield request.param
def test_create_partition_with_invalid_partition_name(self, connect, table, get_partition_name):
'''
target: test create partition, with invalid partition name, check status returned
method: call function: create_partition
expected: status not ok
'''
partition_name = get_partition_name
status = connect.create_partition(table, partition_name, tag)
assert not status.OK()
def test_create_partition_with_invalid_tag_name(self, connect, table):
'''
target: test create partition, with invalid tag name, check status returned
method: call function: create_partition
expected: status not ok
'''
tag_name = " "
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag_name)
assert not status.OK()
def test_drop_partition_with_invalid_table_name(self, connect, table, get_table_name):
'''
target: test drop partition, with invalid table name, check status returned
method: call function: drop_partition
expected: status not ok
'''
table_name = get_table_name
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status = connect.drop_partition(table_name, tag)
assert not status.OK()
def test_drop_partition_with_invalid_tag_name(self, connect, table, get_tag_name):
'''
target: test drop partition, with invalid tag name, check status returned
method: call function: drop_partition
expected: status not ok
'''
tag_name = get_tag_name
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status = connect.drop_partition(table, tag_name)
assert not status.OK()
def test_show_partitions_with_invalid_table_name(self, connect, table, get_table_name):
'''
target: test show partitions, with invalid table name, check status returned
method: call function: show_partitions
expected: status not ok
'''
table_name = get_table_name
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status, res = connect.show_partitions(table_name)
assert not status.OK()
\ No newline at end of file
......@@ -16,8 +16,9 @@ add_interval_time = 2
vectors = gen_vectors(100, dim)
# vectors /= numpy.linalg.norm(vectors)
# vectors = vectors.tolist()
nrpobe = 1
nprobe = 1
epsilon = 0.001
tag = "1970-01-01"
class TestSearchBase:
......@@ -49,6 +50,15 @@ class TestSearchBase:
pytest.skip("sq8h not support in open source")
return request.param
@pytest.fixture(
scope="function",
params=gen_simple_index_params()
)
def get_simple_index_params(self, request, args):
if "internal" not in args:
if request.param["index_type"] == IndexType.IVF_SQ8H:
pytest.skip("sq8h not support in open source")
return request.param
"""
generate top-k params
"""
......@@ -70,7 +80,7 @@ class TestSearchBase:
query_vec = [vectors[0]]
top_k = get_top_k
nprobe = 1
status, result = connect.search_vectors(table, top_k, nrpobe, query_vec)
status, result = connect.search_vectors(table, top_k, nprobe, query_vec)
if top_k <= 2048:
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
......@@ -85,7 +95,6 @@ class TestSearchBase:
method: search with the given vectors, check the result
expected: search status ok, and the length of the result is top_k
'''
index_params = get_index_params
logging.getLogger().info(index_params)
vectors, ids = self.init_data(connect, table)
......@@ -93,7 +102,7 @@ class TestSearchBase:
query_vec = [vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(table, top_k, nrpobe, query_vec)
status, result = connect.search_vectors(table, top_k, nprobe, query_vec)
logging.getLogger().info(result)
if top_k <= 1024:
assert status.OK()
......@@ -103,6 +112,160 @@ class TestSearchBase:
else:
assert not status.OK()
def test_search_l2_index_params_partition(self, connect, table, get_simple_index_params):
'''
target: test basic search function; all search params are correct; test all index params, and build
method: add vectors into table, search with the given vectors, check the result
expected: search status ok, and the length of the result is top_k; searching the table with the partition tag returns empty
'''
index_params = get_simple_index_params
logging.getLogger().info(index_params)
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
vectors, ids = self.init_data(connect, table)
status = connect.create_index(table, index_params)
query_vec = [vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(table, top_k, nprobe, query_vec)
logging.getLogger().info(result)
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
assert check_result(result[0], ids[0])
assert result[0][0].distance <= epsilon
status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[tag])
logging.getLogger().info(result)
assert status.OK()
assert len(result) == 0
def test_search_l2_index_params_partition_A(self, connect, table, get_simple_index_params):
'''
target: test basic search function; all search params are correct; test all index params, and build
method: search partition with the given vectors, check the result
expected: search status ok, and the length of the result is 0
'''
index_params = get_simple_index_params
logging.getLogger().info(index_params)
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
vectors, ids = self.init_data(connect, table)
status = connect.create_index(table, index_params)
query_vec = [vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(partition_name, top_k, nprobe, query_vec, partition_tags=[tag])
logging.getLogger().info(result)
assert status.OK()
assert len(result) == 0
def test_search_l2_index_params_partition_B(self, connect, table, get_simple_index_params):
'''
target: test basic search fuction, all the search params is corrent, test all index params, and build
method: search with the given vectors, check the result
expected: search status ok, and the length of the result is top_k
'''
index_params = get_simple_index_params
logging.getLogger().info(index_params)
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
vectors, ids = self.init_data(connect, partition_name)
status = connect.create_index(table, index_params)
query_vec = [vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(table, top_k, nprobe, query_vec)
logging.getLogger().info(result)
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
assert check_result(result[0], ids[0])
assert result[0][0].distance <= epsilon
status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[tag])
logging.getLogger().info(result)
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
assert check_result(result[0], ids[0])
assert result[0][0].distance <= epsilon
status, result = connect.search_vectors(partition_name, top_k, nprobe, query_vec, partition_tags=[tag])
logging.getLogger().info(result)
assert status.OK()
assert len(result) == 0
def test_search_l2_index_params_partition_C(self, connect, table, get_simple_index_params):
'''
target: test basic search function; all search params are correct; test all index params, and build
method: search with the given vectors and tags (one of the tags not present in the table), check the result
expected: search status ok, and the length of the result is top_k
'''
index_params = get_simple_index_params
logging.getLogger().info(index_params)
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
vectors, ids = self.init_data(connect, partition_name)
status = connect.create_index(table, index_params)
query_vec = [vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[tag, "new_tag"])
logging.getLogger().info(result)
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
assert check_result(result[0], ids[0])
assert result[0][0].distance <= epsilon
def test_search_l2_index_params_partition_D(self, connect, table, get_simple_index_params):
'''
target: test basic search function; all search params are correct; test all index params, and build
method: search with the given vectors and a tag not present in the table, check the result
expected: search status ok, and the length of the result is top_k
'''
index_params = get_simple_index_params
logging.getLogger().info(index_params)
partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
vectors, ids = self.init_data(connect, partition_name)
status = connect.create_index(table, index_params)
query_vec = [vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=["new_tag"])
logging.getLogger().info(result)
assert status.OK()
assert len(result) == 0
def test_search_l2_index_params_partition_E(self, connect, table, get_simple_index_params):
'''
target: test basic search function; all search params are correct; test all index params, and build
method: search table with the given vectors and tags, check the result
expected: search status ok, and the length of the result is top_k
'''
new_tag = "new_tag"
index_params = get_simple_index_params
logging.getLogger().info(index_params)
partition_name = gen_unique_str()
new_partition_name = gen_unique_str()
status = connect.create_partition(table, partition_name, tag)
status = connect.create_partition(table, new_partition_name, new_tag)
vectors, ids = self.init_data(connect, partition_name)
new_vectors, new_ids = self.init_data(connect, new_partition_name, nb=1000)
status = connect.create_index(table, index_params)
query_vec = [vectors[0], new_vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[tag, new_tag])
logging.getLogger().info(result)
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
assert check_result(result[0], ids[0])
assert check_result(result[1], new_ids[0])
assert result[0][0].distance <= epsilon
assert result[1][0].distance <= epsilon
status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[new_tag])
logging.getLogger().info(result)
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
assert check_result(result[1], new_ids[0])
assert result[1][0].distance <= epsilon
def test_search_ip_index_params(self, connect, ip_table, get_index_params):
'''
target: test basic search function; all search params are correct; test all index params, and build
......@@ -117,7 +280,7 @@ class TestSearchBase:
query_vec = [vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(ip_table, top_k, nrpobe, query_vec)
status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vec)
logging.getLogger().info(result)
if top_k <= 1024:
......@@ -128,6 +291,59 @@ class TestSearchBase:
else:
assert not status.OK()
def test_search_ip_index_params_partition(self, connect, ip_table, get_simple_index_params):
'''
target: test basic search function; all search params are correct; test all index params, and build
method: search with the given vectors, check the result
expected: search status ok, and the length of the result is top_k
'''
index_params = get_simple_index_params
logging.getLogger().info(index_params)
partition_name = gen_unique_str()
status = connect.create_partition(ip_table, partition_name, tag)
vectors, ids = self.init_data(connect, ip_table)
status = connect.create_index(ip_table, index_params)
query_vec = [vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vec)
logging.getLogger().info(result)
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
assert check_result(result[0], ids[0])
assert abs(result[0][0].distance - numpy.inner(numpy.array(query_vec[0]), numpy.array(query_vec[0]))) <= gen_inaccuracy(result[0][0].distance)
status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vec, partition_tags=[tag])
logging.getLogger().info(result)
assert status.OK()
assert len(result) == 0
def test_search_ip_index_params_partition_A(self, connect, ip_table, get_simple_index_params):
'''
target: test basic search function; all search params are correct; test all index params, and build
method: search with the given vectors and tag, check the result
expected: search status ok, and the length of the result is top_k
'''
index_params = get_simple_index_params
logging.getLogger().info(index_params)
partition_name = gen_unique_str()
status = connect.create_partition(ip_table, partition_name, tag)
vectors, ids = self.init_data(connect, partition_name)
status = connect.create_index(ip_table, index_params)
query_vec = [vectors[0]]
top_k = 10
nprobe = 1
status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vec, partition_tags=[tag])
logging.getLogger().info(result)
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
assert check_result(result[0], ids[0])
assert abs(result[0][0].distance - numpy.inner(numpy.array(query_vec[0]), numpy.array(query_vec[0]))) <= gen_inaccuracy(result[0][0].distance)
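# Searching the partition by its internal name directly should return the same hits as the tag-restricted search above.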
status, result = connect.search_vectors(partition_name, top_k, nprobe, query_vec)
logging.getLogger().info(result)
assert status.OK()
assert len(result[0]) == min(len(vectors), top_k)
assert check_result(result[0], ids[0])
@pytest.mark.level(2)
def test_search_vectors_without_connect(self, dis_connect, table):
'''
......@@ -518,6 +734,14 @@ class TestSearchParamsInvalid(object):
status, result = connect.search_vectors(table_name, top_k, nprobe, query_vecs)
assert not status.OK()
@pytest.mark.level(1)
def test_search_with_invalid_tag_format(self, connect, table):
top_k = 1
nprobe = 1
query_vecs = gen_vectors(1, dim)
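# partition_tags is expected to be a list of tag strings; passing a bare string should raise.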
with pytest.raises(Exception) as e:
status, result = connect.search_vectors(table, top_k, nprobe, query_vecs, partition_tags="tag")
"""
Test search table with invalid top-k
"""
......@@ -574,7 +798,7 @@ class TestSearchParamsInvalid(object):
yield request.param
@pytest.mark.level(1)
def test_search_with_invalid_nprobe(self, connect, table, get_nprobes):
'''
target: test search function with invalid nprobe
method: search with the given nprobe values
......@@ -592,7 +816,7 @@ class TestSearchParamsInvalid(object):
status, result = connect.search_vectors(table, top_k, nprobe, query_vecs)
@pytest.mark.level(2)
def test_search_with_invalid_nprobe_ip(self, connect, ip_table, get_nprobes):
'''
target: test search function with invalid nprobe, on a table using the IP metric
method: search with the given nprobe values
......
......@@ -297,7 +297,7 @@ class TestTable:
'''
table_name = gen_unique_str("test_table")
status = connect.delete_table(table_name)
assert not status.OK()
def test_delete_table_repeatedly(self, connect):
'''
......
......@@ -13,8 +13,8 @@ from milvus import IndexType, MetricType
dim = 128
index_file_size = 10
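# Seconds to wait after add_vectors so that newly inserted data is flushed and visible to row-count queries.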
add_time_interval = 3
tag = "1970-01-01"
class TestTableCount:
"""
......@@ -58,6 +58,90 @@ class TestTableCount:
status, res = connect.get_table_row_count(table)
assert res == nb
def test_table_rows_count_partition(self, connect, table, add_vectors_nb):
'''
target: test table rows_count is correct or not
method: create table, create partition and add vectors in it,
assert the value returned by get_table_row_count method is equal to length of vectors
expected: the count is equal to the length of vectors
'''
nb = add_vectors_nb
partition_name = gen_unique_str()
vectors = gen_vectors(nb, dim)
status = connect.create_partition(table, partition_name, tag)
assert status.OK()
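# Vectors are added with the partition tag, but the row count is queried at table level and should still include them.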
res = connect.add_vectors(table_name=table, records=vectors, partition_tag=tag)
time.sleep(add_time_interval)
status, res = connect.get_table_row_count(table)
assert res == nb
def test_table_rows_count_multi_partitions_A(self, connect, table, add_vectors_nb):
'''
target: test table rows_count is correct or not
method: create table, create partitions and add vectors in it,
assert the value returned by get_table_row_count method is equal to length of vectors
expected: the count is equal to the length of vectors
'''
new_tag = "new_tag"
nb = add_vectors_nb
partition_name = gen_unique_str()
new_partition_name = gen_unique_str()
vectors = gen_vectors(nb, dim)
status = connect.create_partition(table, partition_name, tag)
status = connect.create_partition(table, new_partition_name, new_tag)
assert status.OK()
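# Vectors added without a partition_tag go into the table's default partition, so the table-level count should still equal nb.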
res = connect.add_vectors(table_name=table, records=vectors)
time.sleep(add_time_interval)
status, res = connect.get_table_row_count(table)
assert res == nb
def test_table_rows_count_multi_partitions_B(self, connect, table, add_vectors_nb):
'''
target: test table rows_count is correct or not
method: create table, create partitions and add vectors in one of the partitions,
assert the value returned by get_table_row_count method is equal to length of vectors
expected: the count is equal to the length of vectors
'''
new_tag = "new_tag"
nb = add_vectors_nb
partition_name = gen_unique_str()
new_partition_name = gen_unique_str()
vectors = gen_vectors(nb, dim)
status = connect.create_partition(table, partition_name, tag)
status = connect.create_partition(table, new_partition_name, new_tag)
assert status.OK()
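# Only the first partition receives vectors; its count should equal nb while the new partition stays at 0.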
res = connect.add_vectors(table_name=table, records=vectors, partition_tag=tag)
time.sleep(add_time_interval)
status, res = connect.get_table_row_count(partition_name)
assert res == nb
status, res = connect.get_table_row_count(new_partition_name)
assert res == 0
def test_table_rows_count_multi_partitions_C(self, connect, table, add_vectors_nb):
'''
target: test table rows_count is correct or not
method: create table, create partitions and add vectors in one of the partitions,
assert the value returned by get_table_row_count method is equal to length of vectors
expected: the table count is equal to the length of vectors
'''
new_tag = "new_tag"
nb = add_vectors_nb
partition_name = gen_unique_str()
new_partition_name = gen_unique_str()
vectors = gen_vectors(nb, dim)
status = connect.create_partition(table, partition_name, tag)
status = connect.create_partition(table, new_partition_name, new_tag)
assert status.OK()
res = connect.add_vectors(table_name=table, records=vectors, partition_tag=tag)
res = connect.add_vectors(table_name=table, records=vectors, partition_tag=new_tag)
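# Each partition receives nb vectors, so the partition counts should be nb each and the table-level count nb * 2.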
time.sleep(add_time_interval)
status, res = connect.get_table_row_count(partition_name)
assert res == nb
status, res = connect.get_table_row_count(new_partition_name)
assert res == nb
status, res = connect.get_table_row_count(table)
assert res == nb * 2
def test_table_rows_count_after_index_created(self, connect, table, get_simple_index_params):
'''
target: test get_table_row_count, after index have been created
......