Unverified commit 0ea37b5e, authored by J Jiawei Wang, committed by GitHub

Merge branch 'develop' into toolchain0.7

@@ -60,6 +60,7 @@ This chapter guides you through the installation and deployment steps. It is str
 - [Deploy Paddle Serving on Kubernetes](doc/Run_On_Kubernetes_CN.md)
 - [Deploy Paddle Serving with Security gateway(Chinese)](doc/Serving_Auth_Docker_CN.md)
 - [Deploy Paddle Serving on more hardwares](doc/Run_On_XPU_EN.md)
+- [Docker Images](doc/Docker_Images_EN.md)
 - [Latest Wheel packages](doc/Latest_Packages_CN.md)
 > Use
......
@@ -56,6 +56,7 @@ Paddle Serving, relying on the PaddlePaddle deep learning framework, aims to help deep learning devel
 - [Deploy Paddle Serving on a Kubernetes cluster](doc/Run_On_Kubernetes_CN.md)
 - [Deploy Paddle Serving with a security gateway](doc/Serving_Auth_Docker_CN.md)
 - [Deploy Paddle Serving on heterogeneous hardware](doc/Run_On_XPU_CN.md)
+- [Docker Images](doc/Docker_Images_CN.md)
 - [Latest Wheel packages](doc/Latest_Packages_CN.md)
 > Use
......
@@ -54,13 +54,21 @@ def kv_to_seqfile():
     finally:
         fp.close()
     for line in lines:
-        line_list = line.split(':')
+        line_list = line.split()
+        if len(line_list) < 1:
+            continue
         key = int(line_list[0])
-        value = str(line_list[1]).replace('\n', '')
+        show = int(line_list[1])
+        click = int(line_list[2])
+        values = [float(x) for x in line_list[3:]]
+        # str(line_list[1]).replace('\n', '')
         res.append(dict)
         key_bytes = struct.pack('Q', key)
-        row_bytes = struct.pack('%ss' % len(value), value)
-        print key, ':', value, '->', key_bytes, ':', row_bytes
+        row_bytes = ""
+        for v in values:
+            row_bytes += struct.pack('f', v)
+        print key, ':', values, '->', key_bytes, ':', row_bytes
         writer.write(key_bytes, row_bytes)
     f.close()
     write_donefile()
......
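For context, the value layout written by the updated `kv_to_seqfile` loop above can be sketched in a few lines. This is a hedged illustration, not code from the commit: it targets Python 3 (the diffed script is Python 2, where `row_bytes` starts as a plain string), and the key and feature values are made up.

```python
import struct

# Hypothetical sample record: a uint64 key plus a list of float features.
# show/click are parsed by the updated loop but are not packed into the value.
key = 12345
values = [0.1, 0.2, 0.3]

# 'Q' packs the key as a native unsigned 64-bit integer (8 bytes).
key_bytes = struct.pack('Q', key)

# Each feature is appended as a native 4-byte float, so the value blob
# is len(values) * 4 bytes long.
row_bytes = b''.join(struct.pack('f', v) for v in values)

print(len(key_bytes), len(row_bytes))  # 8 12 for this sample
```

Packing the features directly as floats, rather than as the old colon-delimited string, is what allows the C++ reader in the next hunk to reinterpret the value bytes as a float array.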
@@ -55,7 +55,14 @@ void printSeq(std::string file, int limit) {
     total_count++;
     int64_t value_length = record.record_len - record.key_len;
-    std::cout << "key: " << key << " , value: " << string_to_hex(record.value.c_str()) << std::endl;
+    float *data_ptr = new float[record.value.size() / 4];
+    memcpy(data_ptr, record.value.data(), record.value.size());
+    std::cout << "key: " << key << " , value: " << string_to_hex(record.value.c_str()) << std::endl;
+    for (int i = 0; i < record.value.size() / 4; ++i) {
+      std::cout << data_ptr[i] << " ";
+    }
+    std::cout << std::endl;
+    delete[] data_ptr;
     if (total_count >= limit) {
       break;
     }
......
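The reader change above simply reinterprets the value bytes as floats via memcpy. The same decoding can be sketched in Python, assuming the blob holds native-endian 4-byte floats as written by `kv_to_seqfile`; the helper name `decode_value` is made up for illustration.

```python
import struct

def decode_value(value_bytes):
    """Decode a seqfile value blob back into its float feature vector.

    Assumes the blob is a sequence of native-endian 4-byte floats,
    matching what the updated kv_to_seqfile writes.
    """
    num_floats = len(value_bytes) // 4
    return list(struct.unpack('%df' % num_floats, value_bytes))

# Round-trip check with a made-up feature vector.
blob = b''.join(struct.pack('f', v) for v in [1.0, 2.5, -3.75])
print(decode_value(blob))  # [1.0, 2.5, -3.75]
```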
@@ -3,6 +3,9 @@
 Paddle Serving supports inference deployment on Baidu Kunlun chips. Deployment is currently supported on ARM servers with Baidu Kunlun chips (such as Phytium FT-2000+/64) or on Intel CPU servers with Baidu Kunlun chips; support for other heterogeneous hardware servers will be improved later.
+## Install the Docker image
+We recommend deploying the Serving service with Docker. In an XPU environment, refer to the [Docker images](Docker_Images_CN.md) document to install the XPU image, and then complete compilation, installation, and deployment.
 ## Compilation and installation
 Refer to [this document](Compile_CN.md) for the basic environment setup. The following uses a Phytium FT-2000+/64 machine as an example.
 ### Compilation
......
@@ -5,6 +5,9 @@
 Paddle Serving supports deployment using Baidu Kunlun chips. Currently, it supports deployment on ARM CPU servers with Baidu Kunlun chips
 (such as Phytium FT-2000+/64), or on Intel CPU servers with Baidu Kunlun chips. We will improve
 the deployment capability on various heterogeneous hardware servers in the future.
+## Install Docker images
+We recommend deploying the Serving service with Docker. In the XPU environment, you can refer to the [Docker images](Docker_Images_EN.md) document to install the XPU image, and then complete compilation, installation, and deployment.
 ## Compilation and installation
 Refer to the [compile](./Compile_EN.md) document to set up the compilation environment. The following is based on the Phytium FT-2000+/64 platform.
......