Commit 2a21d8b3 authored by dangqingqing

Merge branch 'develop' of https://github.com/baidu/Paddle into cn_doc

......@@ -7,18 +7,14 @@
hooks:
- id: yapf
- repo: https://github.com/pre-commit/pre-commit-hooks
sha: 4ef03c4223ad322c7adaa6c6c0efb26b57df3b71
sha: 7539d8bd1a00a3c1bfd34cdb606d3a6372e83469
hooks:
- id: check-added-large-files
- id: check-merge-conflict
- id: check-symlinks
- id: detect-private-key
- id: end-of-file-fixer
# TODO(yuyang): trailing whitespace has some bugs on markdown
# files now, please do not add it to the pre-commit hooks for now
# - id: trailing-whitespace
#
# TODO(yuyang): debug-statements does not fit Paddle, because
# not all of our python code is runnable. Some of it is used for
# documentation
# - id: debug-statements
- repo: https://github.com/PaddlePaddle/clang-format-pre-commit-hook.git
sha: 28c0ea8a67a3e2dbbf4822ef44e85b63a0080a29
hooks:
- id: clang-formater
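With this `.pre-commit-config.yaml` in place, the hooks above can be enabled locally through the standard `pre-commit` workflow (a minimal sketch of the usual tool usage; these setup steps are not part of this commit):

```bash
# Install the pre-commit tool and register the configured hooks with git.
pip install pre-commit
pre-commit install

# Optionally check the whole tree with every configured hook
# (yapf, the pre-commit-hooks checks, clang-format).
pre-commit run --all-files
```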
# PaddlePaddle
[![Build Status](https://travis-ci.org/PaddlePaddle/Paddle.svg?branch=develop)](https://travis-ci.org/baidu/Paddle)
[![Build Status](https://travis-ci.org/PaddlePaddle/Paddle.svg?branch=develop)](https://travis-ci.org/PaddlePaddle/Paddle)
[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](http://www.paddlepaddle.org/)
[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](http://www.paddlepaddle.org/cn/index.html)
[![Coverage Status](https://coveralls.io/repos/github/PaddlePaddle/Paddle/badge.svg?branch=develop)](https://coveralls.io/github/baidu/Paddle?branch=develop)
[![Release](https://img.shields.io/github/release/baidu/Paddle.svg?colorB=fedcba)](https://github.com/baidu/Paddle/releases)
[![Coverage Status](https://coveralls.io/repos/github/PaddlePaddle/Paddle/badge.svg?branch=develop)](https://coveralls.io/github/PaddlePaddle/Paddle?branch=develop)
[![Release](https://img.shields.io/github/release/PaddlePaddle/Paddle.svg)](https://github.com/PaddlePaddle/Paddle/releases)
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)
......@@ -17,7 +17,7 @@ developed by Baidu scientists and engineers for the purpose of applying deep
learning to many products at Baidu.
Our vision is to enable deep learning for everyone via PaddlePaddle.
Please refer to our [release announcement](https://github.com/baidu/Paddle/releases) to track the latest features of PaddlePaddle.
Please refer to our [release announcement](https://github.com/PaddlePaddle/Paddle/releases) to track the latest features of PaddlePaddle.
## Features
......@@ -92,7 +92,7 @@ Both [English Docs](http://paddlepaddle.org/doc/) and [Chinese Docs](http://padd
## Ask Questions
You are welcome to submit questions and bug reports as [Github Issues](https://github.com/baidu/paddle/issues).
You are welcome to submit questions and bug reports as [Github Issues](https://github.com/PaddlePaddle/Paddle/issues).
## Copyright and License
PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).
......@@ -6,10 +6,10 @@ Installing from Sources
* [3. Build on Ubuntu](#ubuntu)
## <span id="download">Download and Setup</span>
You can download PaddlePaddle from the [github source](https://github.com/gangliao/Paddle).
You can download PaddlePaddle from the [github source](https://github.com/PaddlePaddle/Paddle).
```bash
git clone https://github.com/baidu/Paddle paddle
git clone https://github.com/PaddlePaddle/Paddle paddle
cd paddle
```
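After cloning, a typical out-of-source cmake build looks like the sketch below (illustrative only; the required dependencies and the available options are described in the sections that follow):

```bash
# Configure in a separate build directory, then compile.
mkdir build && cd build
cmake ..
make -j$(nproc)
```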
......
MKL_ROOT,"path to mkl; ${MKL_ROOT}/include must contain mkl.h, and ${MKL_ROOT}/lib must contain the mkl_core, mkl_sequential and mkl_intel_lp64 libraries"
ATLAS_ROOT,"path to the ATLAS library; ${ATLAS_ROOT}/include must contain cblas.h, and ${ATLAS_ROOT}/lib must contain the cblas and atlas libraries"
OPENBLAS_ROOT,"${OPENBLAS_ROOT}/include must contain cblas.h, and ${OPENBLAS_ROOT}/lib must contain the openblas library"
REFERENCE_CBLAS_ROOT,"${REFERENCE_CBLAS_ROOT}/include must contain cblas.h, and ${REFERENCE_CBLAS_ROOT}/lib must contain the cblas library"
\ No newline at end of file
Option,Description,Notes
MKL_ROOT,path to MKL,"${MKL_ROOT}/include must contain mkl.h, and ${MKL_ROOT}/lib must contain the mkl_core, mkl_sequential and mkl_intel_lp64 libraries."
ATLAS_ROOT,path to ATLAS,"${ATLAS_ROOT}/include must contain cblas.h, and ${ATLAS_ROOT}/lib must contain the cblas and atlas libraries."
OPENBLAS_ROOT,path to OpenBLAS,"${OPENBLAS_ROOT}/include must contain cblas.h, and ${OPENBLAS_ROOT}/lib must contain the openblas library."
REFERENCE_CBLAS_ROOT,path to REFERENCE BLAS,"${REFERENCE_CBLAS_ROOT}/include must contain cblas.h, and ${REFERENCE_CBLAS_ROOT}/lib must contain the cblas library."
\ No newline at end of file
Option,Description,Default
WITH_GPU,Whether to build GPU support.,Whether the cuda toolchain is found
WITH_GPU,Whether to support GPU.,Depends on whether the CUDA toolchain is found
WITH_DOUBLE,Whether to use double-precision floating point.,No
WITH_DSO,"Whether to load the cuda shared libraries dynamically at runtime, instead of linking them statically.",Yes
WITH_AVX,Whether to build PaddlePaddle binaries with the AVX instruction set,Yes
WITH_PYTHON,Whether to embed the python interpreter. Convenient for embedded use.,Yes
WITH_DSO,"Whether to load the CUDA shared libraries dynamically at runtime, instead of linking them statically.",Yes
WITH_AVX,Whether to build PaddlePaddle binary files with the AVX instruction set,Yes
WITH_PYTHON,Whether to embed the PYTHON interpreter. Convenient for future embedded porting work.,Yes
WITH_STYLE_CHECK,Whether to run code style checks during compilation,Yes
WITH_RDMA,Whether to enable RDMA support,No
WITH_GLOG,"Whether to use GLOG; if not, a simplified logging implementation is used. Convenient for embedded use.",Depends on whether GLOG is found
WITH_GFLAGS,"Whether to use GFLAGS; if not, a simplified command-line argument parser is used. Convenient for embedded use.",Depends on whether GFLAGS is found
WITH_TIMER,"Whether to enable the timer. Enabling it makes execution slightly slower and prints more logs, but is convenient for debugging and benchmarking",No
WITH_TESTING,Whether to enable unit tests,Depends on whether gtest is found
WITH_DOC,Whether to build the English documentation,No
WITH_DOC_CN,Whether to build the Chinese documentation,No
WITH_SWIG_PY,"Whether to build the swig interface for python, which is convenient for inference and customized training",Depends on whether swig is found
WITH_RDMA,Whether to enable RDMA,No
WITH_GLOG,"Whether to enable GLOG. If disabled, a simplified logging implementation is used, which also eases future embedded porting work.",Depends on whether GLOG is found
WITH_GFLAGS,"Whether to use GFLAGS. If disabled, a simplified command-line argument parser is used, which also eases future embedded porting work.",Depends on whether GFLAGS is found
WITH_TIMER,"Whether to enable the timer. Enabling it makes execution slightly slower and prints more logs, but is convenient for debugging and benchmarking.",No
WITH_TESTING,Whether to enable unit tests,Depends on whether GTEST is found
WITH_DOC,Whether to build the English and Chinese documentation,No
WITH_SWIG_PY,"Whether to build the SWIG interface for PYTHON, which can be used for inference and customized training",Depends on whether SWIG is found
\ No newline at end of file
Setting PaddlePaddle's Compile Options
======================================
PaddlePaddle's Compile Options
==============================
PaddlePaddle's compile options can be set when invoking cmake. cmake is a cross-platform build tool; invoking
cmake turns the cmake project files into makefiles for each platform. For details on using cmake, see the
`official cmake documentation <https://cmake.org/cmake-tutorial>`_ .
PaddlePaddle's compile options cover things such as whether to generate CPU/GPU binaries and which BLAS library to link. Users can set them when invoking cmake; for details on using cmake, see the `official documentation <https://cmake.org/cmake-tutorial>`_ .
PaddlePaddle's compile options control whether CPU/GPU binaries are generated, which blas library is linked, and so on. The full
list of compile options is as follows
Boolean compile options
-----------------------
Users can set this kind of option on the cmake command line with the ``-D`` flag, for example
PaddlePaddle's compile options
------------------------------
.. code-block:: bash
Boolean compile options
+++++++++++++++++++++++
The following compile options can be set on the cmake command line with the -D flag. For example
:code:`cmake -D WITH_GPU=OFF`
cmake .. -DWITH_GPU=OFF
.. csv-table:: PaddlePaddle's boolean compile options
.. csv-table:: Boolean compile options
:widths: 1, 7, 2
:file: compile_options.csv
BLAS-related compile options
++++++++++++++++++++++++++++
PaddlePaddle can use any one of the following cblas implementations: `MKL <https://software.intel.com/en-us/intel-mkl>`_ ,
`Atlas <http://math-atlas.sourceforge.net/>`_ ,
`OpenBlas <http://www.openblas.net/>`_ and
`reference BLAS <http://www.netlib.org/blas/>`_ .
The BLAS library to use is selected by specifying its path at build time.
BLAS/CUDA/cuDNN compile options
-------------------------------
BLAS
+++++
During the cmake build, these blas implementations are first searched for in the system paths (/usr/lib\:/usr/local/lib),
and the related path variables are also read during the search. The path variables are\:
PaddlePaddle supports any one of the following BLAS libraries: `MKL <https://software.intel.com/en-us/intel-mkl>`_ , `ATLAS <http://math-atlas.sourceforge.net/>`_ , `OpenBLAS <http://www.openblas.net/>`_ and `REFERENCE BLAS <http://www.netlib.org/blas/>`_ .
.. csv-table:: PaddlePaddle's cblas compile options
:widths: 1, 9
:header: "Compile option", "Description"
.. csv-table:: BLAS path-related compile options
:widths: 1, 2, 7
:file: cblas_settings.csv
All of these variables can be set with the -D flag, e.g. :code:`cmake -D MKL_ROOT=/opt/mkl/`. They can
also be set via environment variables before invoking cmake. For example
.. code-block:: bash
CUDA/cuDNN
+++++++++++
export MKL_ROOT=/opt/mkl
cmake
PaddlePaddle can be built and run with any cuDNN version from v2 onward, but try to keep the cuDNN used for building and running the same version. We recommend the latest version, cuDNN v5.1.
Note that these variables only take effect the first time cmake is run. To reset them after the first
run, it is recommended to remove the build directory ( :code:`rm -rf` ) and then set them again.
Setting the compile options
+++++++++++++++++++++++++++
CUDA/cuDNN-related compile options
++++++++++++++++++++++++++++++++++
PaddlePaddle selects the BLAS/CUDA/cuDNN libraries by specifying their paths at build time. During the cmake build, these libraries are first searched for in the system paths (/usr/lib\:/usr/local/lib), and the related path variables are also read during the search. They can be set with the ``-D`` flag, for example
PaddlePaddle can be built and run with any cudnn version from v2 onward. Note, however, that the cudnn used for
building and running should be the same version. We recommend the latest version, cudnn v5.1.
.. code-block:: bash
During cmake configuration, :code:`CUDNN_ROOT` can be used to set the cuDNN installation path, again
with -D, e.g. :code:`cmake -D CUDNN_ROOT=/opt/cudnnv5` .
cmake .. -DMKL_ROOT=/opt/mkl/ -DCUDNN_ROOT=/opt/cudnnv5
Note that these variables only take effect the first time cmake is run. To reset them after the first
run, it is recommended to remove the build directory ( :code:`rm -rf` ) and then set them again.
Note: these options only take effect the first time cmake is run. To reset them later, it is recommended to remove the entire build directory (``rm -rf``) and then set them again.
\ No newline at end of file
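Putting the notes above together, a clean reconfigure after changing options can be sketched as follows (assuming an out-of-source build directory named ``build``; the option values are only examples):

.. code-block:: bash

   # The options are only read on the first cmake run, so start from a clean build directory.
   rm -rf build && mkdir build && cd build
   cmake .. -DWITH_GPU=OFF -DMKL_ROOT=/opt/mkl/ -DCUDNN_ROOT=/opt/cudnnv5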
......@@ -2,32 +2,19 @@
How to Contribute to / Modify PaddlePaddle's Documentation
###########################################################
PaddlePaddle's documentation is generated by `sphinx`_ driven by `cmake`_. There are two sets of documentation, :code:`doc` and :code:`doc_cn`. Both are built by `cmake`_, and the generated documentation is stored on the server under the :code:`doc` and :code:`doc_cn` directories.
PaddlePaddle's documentation consists of the English documentation ``doc`` and the Chinese documentation ``doc_cn``. Both are generated by `sphinx`_ driven by `cmake`_, and the generated documentation is stored in the ``doc`` and ``doc_cn`` subdirectories of the build directory, respectively.
The following sections describe how to contribute to PaddlePaddle's documentation.
How to write PaddlePaddle's documentation
=========================================
TBD
How to build PaddlePaddle's documentation
=========================================
Building PaddlePaddle's documentation requires the full environment used to build Paddle. Preparing this environment is relatively complicated, so this document provides two ways to build the documentation:
* building the documentation with Docker
* building the documentation directly.
We recommend using Docker to build PaddlePaddle's documentation.
PaddlePaddle's documentation can be built either directly or with Docker. The environment needed to build the documentation is relatively complicated to prepare, so we recommend the Docker-based build.
Building the documentation with Docker
--------------------------------------
To build the documentation with Docker, the Docker toolkit must first be installed on your system. For Docker installation, see `Docker's official site <https://docs.docker.com/>`_ .
Once Docker is installed, the documentation can be built with the script in the source tree:
To build the documentation with Docker, first install the Docker toolkit on your system; for Docker installation, see `Docker's official site <https://docs.docker.com/>`_ . Once Docker is installed, the documentation can be built with the script in the source tree:
.. code-block:: bash
......@@ -35,10 +22,10 @@ TBD
cd paddle/scripts/tools/build_docs
bash build_docs.sh
After running this script, two directories are generated under this directory\:
After the build finishes, the following two subdirectories are generated under this directory\:
* the doc directory, holding the English documentation
* the doc_cn directory, holding the Chinese documentation
* doc, the English documentation directory
* doc_cn, the Chinese documentation directory
Open index.html in the corresponding directory in a browser to view the local documentation.
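The generated documentation can also be served over HTTP instead of being opened from the filesystem (a convenience sketch, not part of the build scripts; assumes Python 3 is available):

.. code-block:: bash

   # Serve the English docs at http://localhost:8000
   cd doc && python -m http.server 8000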
......@@ -52,6 +39,10 @@ TBD
TBD
How to write PaddlePaddle's documentation
=========================================
TBD
How to update the www.paddlepaddle.org documentation
====================================================
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "PaddleAPI.h"
#include "PaddleAPIPrivate.h"
......@@ -112,7 +111,7 @@ void Arguments::setSlotSequenceStartPositions(size_t idx,
}
void Arguments::setSlotSubSequenceStartPositions(
size_t idx, IVector *vec) throw(RangeError) {
size_t idx, IVector* vec) throw(RangeError) {
auto& a = m->getArg(idx);
auto& v = m->cast<paddle::IVector>(vec->getSharedPtr());
a.subSequenceStartPositions = std::make_shared<paddle::ICpuGpuVector>(v);
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "PaddleAPI.h"
#include "PaddleAPIPrivate.h"
#include "paddle/trainer/Trainer.h"
......@@ -44,8 +43,7 @@ TrainerConfig* TrainerConfig::createFromTrainerConfigFile(
return retv;
}
TrainerConfig* TrainerConfig::createFromProtoString(
const std::string& str) {
TrainerConfig* TrainerConfig::createFromProtoString(const std::string& str) {
auto retv = new TrainerConfig();
paddle::TrainerConfig trainerConfigProto;
auto conf = std::make_shared<paddle::TrainerConfigHelper>(trainerConfigProto);
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "PaddleAPI.h"
#include "PaddleAPIPrivate.h"
......@@ -27,7 +26,8 @@ GradientMachine::GradientMachine() : m(new GradientMachinePrivate()) {}
GradientMachine::~GradientMachine() { delete m; }
GradientMachine* GradientMachine::createFromPaddleModelPtr(
const void* confPtr, GradientMatchineCreateMode mode,
const void* confPtr,
GradientMatchineCreateMode mode,
const std::vector<int>& types) {
auto& conf = *(const paddle::ModelConfig*)(confPtr);
std::vector<ParameterType> realTypes;
......@@ -44,7 +44,8 @@ GradientMachine* GradientMachine::createFromPaddleModelPtr(
}
GradientMachine* GradientMachine::createByConfigProtoStr(
const std::string& protoStr, GradientMatchineCreateMode mode,
const std::string& protoStr,
GradientMatchineCreateMode mode,
const std::vector<int>& types) {
paddle::ModelConfig conf;
conf.ParseFromString(protoStr);
......@@ -56,13 +57,15 @@ GradientMachine* GradientMachine::createByConfigProtoStr(
}
GradientMachine* GradientMachine::createByModelConfig(
ModelConfig* conf, GradientMatchineCreateMode mode,
ModelConfig* conf,
GradientMatchineCreateMode mode,
const std::vector<int>& types) {
auto confPtr = &conf->m->conf->getModelConfig();
return GradientMachine::createFromPaddleModelPtr(confPtr, mode, types);
}
void GradientMachine::forward(const Arguments& inArgs, Arguments* outArgs,
void GradientMachine::forward(const Arguments& inArgs,
Arguments* outArgs,
PassType passType) {
auto& in =
m->cast<std::vector<paddle::Argument>>(inArgs.getInternalArgumentsPtr());
......@@ -99,7 +102,8 @@ void GradientMachine::backward(const UpdateCallback& callback) {
}
void GradientMachine::forwardBackward(const Arguments& inArgs,
Arguments* outArgs, PassType passType,
Arguments* outArgs,
PassType passType,
const UpdateCallback& callback) {
auto& in =
m->cast<std::vector<paddle::Argument>>(inArgs.getInternalArgumentsPtr());
......@@ -140,8 +144,11 @@ Matrix* GradientMachine::getLayerOutput(const std::string& layerName) const
}
SequenceGenerator* GradientMachine::asSequenceGenerator(
const std::vector<std::string>& dict, size_t begin_id, size_t end_id,
size_t max_length, size_t beam_size) {
const std::vector<std::string>& dict,
size_t begin_id,
size_t end_id,
size_t max_length,
size_t beam_size) {
SequenceGenerator* r =
SequenceGenerator::createByGradientMachineSharedPtr(&m->machine);
r->setDict(dict);
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
#include "PaddleAPI.h"
......@@ -23,7 +22,8 @@ limitations under the License. */
template <typename T1, typename T2>
void staticCastVector(std::vector<T2>* dest, const std::vector<T1>& src) {
dest->resize(src.size());
std::transform(src.begin(), src.end(), dest->begin(), [](T1 t){
return static_cast<T2>(t);
});
std::transform(src.begin(),
src.end(),
dest->begin(),
[](T1 t) { return static_cast<T2>(t); });
}
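// Usage sketch (illustrative, not part of this header): staticCastVector
// performs an element-wise static_cast from one vector's element type to
// another, e.g.
//
//   std::vector<int> src = {1, 2, 3};
//   std::vector<float> dst;
//   staticCastVector(&dst, src);  // dst == {1.0f, 2.0f, 3.0f}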
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "PaddleAPI.h"
#include "paddle/math/Matrix.h"
#include "paddle/math/SparseMatrix.h"
......@@ -44,17 +43,21 @@ Matrix* Matrix::createZero(size_t height, size_t width, bool useGpu) {
return m;
}
Matrix* Matrix::createDense(const std::vector<float>& data, size_t height,
size_t width, bool useGpu) {
Matrix* Matrix::createDense(const std::vector<float>& data,
size_t height,
size_t width,
bool useGpu) {
auto m = new Matrix();
m->m->mat = paddle::Matrix::create(height, width, useGpu);
m->m->mat->copyFrom(data.data(), data.size());
return m;
}
Matrix* Matrix::createDenseFromNumpy(float* data, int dim1, int dim2,
bool copy, bool useGpu)
throw (UnsupportError) {
Matrix* Matrix::createDenseFromNumpy(float* data,
int dim1,
int dim2,
bool copy,
bool useGpu) throw(UnsupportError) {
if (useGpu) {
/// Gpu mode only supports copy=True
if (!copy) {
......@@ -66,7 +69,9 @@ Matrix* Matrix::createDenseFromNumpy(float* data, int dim1, int dim2,
}
}
Matrix* Matrix::createCpuDenseFromNumpy(float* data, int dim1, int dim2,
Matrix* Matrix::createCpuDenseFromNumpy(float* data,
int dim1,
int dim2,
bool copy) {
auto m = new Matrix();
if (copy) {
......@@ -85,12 +90,20 @@ Matrix* Matrix::createGpuDenseFromNumpy(float* data, int dim1, int dim2) {
return m;
}
Matrix* Matrix::createSparse(size_t height, size_t width, size_t nnz,
bool isNonVal, bool isTrans, bool useGpu) {
Matrix* Matrix::createSparse(size_t height,
size_t width,
size_t nnz,
bool isNonVal,
bool isTrans,
bool useGpu) {
auto m = new Matrix();
m->m->mat = paddle::Matrix::createSparseMatrix(
height, width, nnz, isNonVal ? paddle::NO_VALUE : paddle::FLOAT_VALUE,
isTrans, useGpu);
height,
width,
nnz,
isNonVal ? paddle::NO_VALUE : paddle::FLOAT_VALUE,
isTrans,
useGpu);
return m;
}
......@@ -221,7 +234,8 @@ FloatArray Matrix::getData() const {
}
void Matrix::sparseCopyFrom(
const std::vector<int>& rows, const std::vector<int>& cols,
const std::vector<int>& rows,
const std::vector<int>& cols,
const std::vector<float>& vals) throw(UnsupportError) {
auto cpuSparseMat =
std::dynamic_pointer_cast<paddle::CpuSparseMatrix>(m->mat);
......@@ -240,7 +254,8 @@ void Matrix::sparseCopyFrom(
void* Matrix::getSharedPtr() const { return &m->mat; }
void Matrix::toNumpyMatInplace(float** view_data, int* dim1,
void Matrix::toNumpyMatInplace(float** view_data,
int* dim1,
int* dim2) throw(UnsupportError) {
auto cpuMat = std::dynamic_pointer_cast<paddle::CpuMatrix>(m->mat);
if (cpuMat) {
......@@ -251,7 +266,8 @@ void Matrix::toNumpyMatInplace(float** view_data, int* dim1,
throw UnsupportError();
}
}
void Matrix::copyToNumpyMat(float** view_m_data, int* dim1,
void Matrix::copyToNumpyMat(float** view_m_data,
int* dim1,
int* dim2) throw(UnsupportError) {
static_assert(sizeof(paddle::real) == sizeof(float),
"Currently PaddleAPI only support for single "
......@@ -269,8 +285,8 @@ void Matrix::copyToNumpyMat(float** view_m_data, int* dim1,
} else if (auto gpuMat = dynamic_cast<paddle::GpuMatrix*>(m->mat.get())) {
auto src = gpuMat->getData();
auto dest = *view_m_data;
hl_memcpy_device2host(dest, src,
sizeof(paddle::real) * (*dim1) * (*dim2));
hl_memcpy_device2host(
dest, src, sizeof(paddle::real) * (*dim1) * (*dim2));
} else {
LOG(WARNING) << "Unexpected Situation";
throw UnsupportError();
......@@ -278,7 +294,8 @@ void Matrix::copyToNumpyMat(float** view_m_data, int* dim1,
}
}
void Matrix::copyFromNumpyMat(float* data, int dim1,
void Matrix::copyFromNumpyMat(float* data,
int dim1,
int dim2) throw(UnsupportError, RangeError) {
if (isSparse()) {
throw UnsupportError();
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
#include <stddef.h>
......@@ -61,8 +60,8 @@ class RangeError {};
/// Unsupported-operation error, such as accessing GPU memory directly, etc.
class UnsupportError : public std::runtime_error {
public:
UnsupportError() : std::runtime_error(" ") {};
UnsupportError(const std::string& message) : std::runtime_error(message) {};
UnsupportError() : std::runtime_error(" "){};
UnsupportError(const std::string& message) : std::runtime_error(message){};
};
/// This type will map to python's list of float.
......@@ -112,7 +111,8 @@ public:
/**
* Create a Matrix of height x width, filled with zero.
*/
static Matrix* createZero(size_t height, size_t width,
static Matrix* createZero(size_t height,
size_t width,
bool useGpu = isUsingGpu());
/**
......@@ -124,8 +124,11 @@ public:
*
* @note the default sparse type is SPARSE_CSR.
*/
static Matrix* createSparse(size_t height, size_t width, size_t nnz,
bool isNonVal = true, bool trans = false,
static Matrix* createSparse(size_t height,
size_t width,
size_t nnz,
bool isNonVal = true,
bool trans = false,
bool useGpu = isUsingGpu());
/**
......@@ -134,13 +137,17 @@ public:
* @param data list of float should be passed in python.
* @note the value will be copied into a new matrix.
*/
static Matrix* createDense(const std::vector<float>& data, size_t height,
size_t width, bool useGpu = isUsingGpu());
static Matrix* createDense(const std::vector<float>& data,
size_t height,
size_t width,
bool useGpu = isUsingGpu());
static Matrix* createDenseFromNumpy(float* data, int dim1, int dim2,
static Matrix* createDenseFromNumpy(
float* data,
int dim1,
int dim2,
bool copy = true,
bool useGpu = isUsingGpu())
throw (UnsupportError);
bool useGpu = isUsingGpu()) throw(UnsupportError);
/**
* Create Cpu Dense Matrix from numpy matrix, dtype=float32
......@@ -151,7 +158,9 @@ public:
* @param copy true if copy into a new matrix, false will create
* matrix inplace.
*/
static Matrix* createCpuDenseFromNumpy(float* data, int dim1, int dim2,
static Matrix* createCpuDenseFromNumpy(float* data,
int dim1,
int dim2,
bool copy = false);
/// Create Gpu Dense Matrix from numpy matrix, dtype=float32
......@@ -171,11 +180,13 @@ public:
* numpy_mat = m.toNumpyMat()
* @endcode
*/
void toNumpyMatInplace(float** view_data, int* dim1,
void toNumpyMatInplace(float** view_data,
int* dim1,
int* dim2) throw(UnsupportError);
/// Copy To numpy mat.
void copyToNumpyMat(float** view_m_data, int* dim1,
void copyToNumpyMat(float** view_m_data,
int* dim1,
int* dim2) throw(UnsupportError);
/// Copy From Numpy Mat
......@@ -248,15 +259,18 @@ public:
static Vector* create(const std::vector<float>& data,
bool useGpu = isUsingGpu());
static Vector* createVectorFromNumpy(float* data, int dim, bool copy = true,
bool useGpu = isUsingGpu())
throw (UnsupportError);
static Vector* createVectorFromNumpy(
float* data,
int dim,
bool copy = true,
bool useGpu = isUsingGpu()) throw(UnsupportError);
/**
* Create Cpu Vector from numpy array, which dtype=float32
*
* If copy is false, it will create vector inplace.
*/
static Vector* createCpuVectorFromNumpy(float* data, int dim,
static Vector* createCpuVectorFromNumpy(float* data,
int dim,
bool copy = false);
/// Create Gpu Vector from numpy array, which dtype=float32
......@@ -312,16 +326,19 @@ public:
static IVector* create(const std::vector<int>& data,
bool useGpu = isUsingGpu());
static IVector* createVectorFromNumpy(int* data, int dim, bool copy = true,
bool useGpu = isUsingGpu())
throw (UnsupportError);
static IVector* createVectorFromNumpy(
int* data,
int dim,
bool copy = true,
bool useGpu = isUsingGpu()) throw(UnsupportError);
/**
* Create Cpu IVector from numpy array, which dtype=int32
*
* If copy is false, it will create vector inplace
*/
static IVector* createCpuVectorFromNumpy(int* data, int dim,
static IVector* createCpuVectorFromNumpy(int* data,
int dim,
bool copy = false);
/**
* Create Gpu IVector from numpy array, which dtype=int32
......@@ -605,7 +622,8 @@ class ParameterTraverseCallback {
public:
~ParameterTraverseCallback();
void apply(const std::vector<Vector*>& vecs, const ParameterConfig& config,
void apply(const std::vector<Vector*>& vecs,
const ParameterConfig& config,
size_t sparseId);
private:
......@@ -638,7 +656,8 @@ public:
void finishBatch();
void update(const std::vector<Vector*>& vecs, const ParameterConfig& conf,
void update(const std::vector<Vector*>& vecs,
const ParameterConfig& conf,
size_t sparseId = NO_SPARSE_ID);
std::vector<int> getParameterTypes() const;
......@@ -678,7 +697,8 @@ public:
* model config by TrainerConfig
*/
static GradientMachine* createByModelConfig(
ModelConfig* conf, GradientMatchineCreateMode mode = CREATE_MODE_NORMAL,
ModelConfig* conf,
GradientMatchineCreateMode mode = CREATE_MODE_NORMAL,
const std::vector<int>& parameterTypes = defaultParamTypes);
/**
......@@ -701,7 +721,8 @@ public:
/**
* Combine forward/backward
*/
void forwardBackward(const Arguments& inArgs, Arguments* outArgs,
void forwardBackward(const Arguments& inArgs,
Arguments* outArgs,
PassType passType,
const UpdateCallback& callback = UpdateCallback());
......@@ -722,14 +743,17 @@ public:
*/
SequenceGenerator* asSequenceGenerator(
const std::vector<std::string>& dict = std::vector<std::string>(),
size_t begin_id = 0UL, size_t end_id = 0UL, size_t max_length = 100UL,
size_t begin_id = 0UL,
size_t end_id = 0UL,
size_t max_length = 100UL,
size_t beam_size = -1UL);
private:
GradientMachinePrivate* m;
static GradientMachine* createFromPaddleModelPtr(
const void* confPtr, GradientMatchineCreateMode mode,
const void* confPtr,
GradientMatchineCreateMode mode,
const std::vector<int>& types);
// We avoid C++11 init-lists here, so we use a static var as the function default arg.
......@@ -751,8 +775,8 @@ public:
/// Create A Trainer By TrainerConfig. using paddle command line.
static Trainer* createByCommandLine() throw(IOError);
static Trainer* create(TrainerConfig* optConfig, GradientMachine* gm)
throw(IOError);
static Trainer* create(TrainerConfig* optConfig,
GradientMachine* gm) throw(IOError);
/// Start training
void startTrain();
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "PaddleAPI.h"
#include "paddle/parameter/Parameter.h"
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "PaddleAPI.h"
#include "PaddleAPIPrivate.h"
#include "paddle/parameter/ParameterOptimizer.h"
......@@ -32,11 +31,15 @@ struct ParameterTraverseCallbackPrivate {
const paddle::ParameterOptimizer::TraverseCallback& callback)
: callback(callback) {}
void apply(const std::vector<Vector*>& vecs, const ParameterConfig& conf,
void apply(const std::vector<Vector*>& vecs,
const ParameterConfig& conf,
size_t sparseId) {
std::vector<paddle::VectorPtr> real_vecs;
real_vecs.resize(vecs.size());
std::transform(vecs.begin(), vecs.end(), real_vecs.begin(), [](Vector* v) {
std::transform(vecs.begin(),
vecs.end(),
real_vecs.begin(),
[](Vector* v) {
if (v) {
return *(paddle::VectorPtr*)(v->getSharedPtr());
} else {
......@@ -86,9 +89,11 @@ void ParameterOptimizer::startBatch(size_t numSamplesProcessed) {
void ParameterOptimizer::finishBatch() { m->optimizer->finishBatch(); }
void ParameterOptimizer::update(const std::vector<Vector*>& vecs,
const ParameterConfig& conf, size_t sparseId) {
ParameterTraverseCallbackPrivate invoker([&](
const paddle::VectorPtr _vecs[], const paddle::ParameterConfig& config,
const ParameterConfig& conf,
size_t sparseId) {
ParameterTraverseCallbackPrivate invoker(
[&](const paddle::VectorPtr _vecs[],
const paddle::ParameterConfig& config,
size_t sid = -1UL) { m->optimizer->update(_vecs, config, sid); });
invoker.apply(vecs, conf, sparseId);
}
......@@ -116,8 +121,9 @@ void ParameterTraverseCallback::apply(const std::vector<Vector*>& vecs,
ParameterTraverseCallback* ParameterOptimizer::needSpecialTraversal(
const ParameterConfig& config) const {
auto& param_config = *(paddle::ParameterConfig*)const_cast<ParameterConfig&>(
config).getRawPtr();
auto& param_config =
*(paddle::ParameterConfig*)const_cast<ParameterConfig&>(config)
.getRawPtr();
auto callback = m->optimizer->needSpecialTraversal(param_config);
if (callback) {
auto retCallback = new ParameterTraverseCallback();
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "PaddleAPI.h"
#include "paddle/gserver/gradientmachines/GradientMachine.h"
#include "paddle/parameter/Argument.h"
......@@ -42,8 +41,10 @@ struct Path {
// position
static void findNBest(paddle::GradientMachine* gradMachine,
std::vector<paddle::Argument>& inArgs,
std::vector<Path>& finalPaths, size_t bos_id,
size_t eos_id, size_t max_length) {
std::vector<Path>& finalPaths,
size_t bos_id,
size_t eos_id,
size_t max_length) {
std::vector<Path> paths;
Path emptyPath;
paths.push_back(emptyPath);
......@@ -166,7 +167,8 @@ public:
if (id < getSize()) {
Path& p = (*path_)[id];
std::ostringstream sout;
std::transform(p.ids.begin(), p.ids.end(),
std::transform(p.ids.begin(),
p.ids.end(),
std::ostream_iterator<std::string>(sout, split ? " " : ""),
[&](int id) { return (*dict_)[id]; });
return sout.str();
......
......@@ -64,12 +64,11 @@ Trainer* Trainer::createByCommandLine() throw(IOError) {
Trainer::Trainer(TrainerConfig* config, GradientMachine* gm)
: m(new TrainerPrivate()) {
m->init(config->m->conf, /* testing= */false, gm ? gm->m->machine : nullptr);
m->init(config->m->conf, /* testing= */ false, gm ? gm->m->machine : nullptr);
}
Trainer* Trainer::create(TrainerConfig* config, GradientMachine* gm)
throw(IOError)
{
Trainer* Trainer::create(TrainerConfig* config,
GradientMachine* gm) throw(IOError) {
auto retv = new Trainer(config, gm);
if (retv->m->getConfig().IsInitialized()) {
return retv;
......@@ -140,7 +139,9 @@ Matrix* Trainer::getLayerOutput(const std::string& layerName) {
return Matrix::createByPaddleMatrixPtr(&m);
}
void Trainer::forwardOneBatch(size_t batchSize) { m->forwardOneBatch(batchSize); }
void Trainer::forwardOneBatch(size_t batchSize) {
m->forwardOneBatch(batchSize);
}
bool TrainerPrivate::forwardOneBatch(size_t batchSize) {
CHECK(dataProvider_) << "data_provider is not specified";
......@@ -156,7 +157,6 @@ bool TrainerPrivate::forwardOneBatch(size_t batchSize) {
void TrainerPrivate::forwardOneDataBatch(
const std::vector<paddle::Argument>& inArgs) {
std::vector<paddle::Argument>& outArgs = forwardOutput_;
if (config_->getOptConfig().use_sparse_remote_updater()) {
......
......@@ -37,13 +37,15 @@ FloatArray::FloatArray(const float* b, const size_t l)
IntArray::IntArray(const int* b, const size_t l, bool f)
: buf(b), length(l), needFree(f) {}
IntWithFloatArray::IntWithFloatArray(const float* v, const int* i, size_t l,
IntWithFloatArray::IntWithFloatArray(const float* v,
const int* i,
size_t l,
bool f)
: valBuf(v), idxBuf(i), length(l), needFree(f) {}
bool isUsingGpu() {return FLAGS_use_gpu;}
bool isUsingGpu() { return FLAGS_use_gpu; }
void setUseGpu(bool useGpu) {FLAGS_use_gpu = useGpu;}
void setUseGpu(bool useGpu) { FLAGS_use_gpu = useGpu; }
bool isGpuVersion() {
#ifdef PADDLE_ONLY_CPU
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "PaddleAPI.h"
#include "paddle/math/Vector.h"
......@@ -39,8 +38,10 @@ IVector* IVector::create(const std::vector<int>& data, bool useGpu) {
return v;
}
IVector* IVector::createVectorFromNumpy(int* data, int dim, bool copy,
bool useGpu) throw (UnsupportError){
IVector* IVector::createVectorFromNumpy(int* data,
int dim,
bool copy,
bool useGpu) throw(UnsupportError) {
if (useGpu) {
/// if use gpu only copy=true is supported
if (!copy) {
......@@ -137,8 +138,8 @@ void IVector::copyToNumpyArray(int** view_m_data, int* dim1) {
if (auto cpuVec = dynamic_cast<paddle::CpuIVector*>(m->vec.get())) {
std::memcpy(*view_m_data, cpuVec->getData(), sizeof(int) * (*dim1));
} else if (auto gpuVec = dynamic_cast<paddle::GpuIVector*>(m->vec.get())) {
hl_memcpy_device2host(*view_m_data, gpuVec->getData(),
sizeof(int) * (*dim1));
hl_memcpy_device2host(
*view_m_data, gpuVec->getData(), sizeof(int) * (*dim1));
} else {
LOG(INFO) << "Unexpected situation";
}
......@@ -201,8 +202,10 @@ Vector* Vector::createByPaddleVectorPtr(void* ptr) {
}
}
Vector* Vector::createVectorFromNumpy(float* data, int dim, bool copy,
bool useGpu) throw (UnsupportError){
Vector* Vector::createVectorFromNumpy(float* data,
int dim,
bool copy,
bool useGpu) throw(UnsupportError) {
if (useGpu) {
/// if use gpu only copy=True is supported
if (!copy) {
......@@ -251,8 +254,8 @@ void Vector::copyToNumpyArray(float** view_m_data, int* dim1) {
if (auto cpuVec = dynamic_cast<paddle::CpuVector*>(m->vec.get())) {
std::memcpy(*view_m_data, cpuVec->getData(), sizeof(float) * (*dim1));
} else if (auto gpuVec = dynamic_cast<paddle::CpuVector*>(m->vec.get())) {
hl_memcpy_device2host(*view_m_data, gpuVec->getData(),
sizeof(float) * (*dim1));
hl_memcpy_device2host(
*view_m_data, gpuVec->getData(), sizeof(float) * (*dim1));
} else {
LOG(INFO) << "Unexpected situation";
}
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_ACTIVATION_FUNCTIONS_H_
#define HL_ACTIVATION_FUNCTIONS_H_
......@@ -21,11 +20,8 @@ limitations under the License. */
/**
* Active functions: sigmoid, relu, tanh and linear.
*/
#define HPPL_ACTIVE_FUNCTION {hppl::sigmoid, \
hppl::relu, \
hppl::tanh, \
hppl::linear \
}
#define HPPL_ACTIVE_FUNCTION \
{ hppl::sigmoid, hppl::relu, hppl::tanh, hppl::linear }
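// Usage sketch (an assumption about the call sites, which are not shown in
// this diff): the brace list is meant to initialize a table of activation
// functions indexed by activation type, e.g.
//
//   typedef real (*activation_fn)(const real);
//   activation_fn forward_act[] = HPPL_ACTIVE_FUNCTION;
//   // forward_act[0] == hppl::sigmoid, forward_act[1] == hppl::relu, ...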
namespace hppl {
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_AGGREGATE_H_
#define HL_AGGREGATE_H_
......
......@@ -12,22 +12,21 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_AVX_FUNCTIONS_H_
#define HL_AVX_FUNCTIONS_H_
#include <immintrin.h>
namespace hppl {
__m256 relu(const __m256 a);
__m256 sigmoid(const __m256 a);
__m256 tanh(const __m256 a);
__m256 linear(const __m256 a);
__m256 relu(const __m256 a, const __m256 b);
__m256 sigmoid(const __m256 a, const __m256 b);
__m256 tanh(const __m256 a, const __m256 b);
__m256 linear(const __m256 a, const __m256 b);
__m256 relu(const __m256 a);
__m256 sigmoid(const __m256 a);
__m256 tanh(const __m256 a);
__m256 linear(const __m256 a);
__m256 relu(const __m256 a, const __m256 b);
__m256 sigmoid(const __m256 a, const __m256 b);
__m256 tanh(const __m256 a, const __m256 b);
__m256 linear(const __m256 a, const __m256 b);
} // namespace hppl
#endif // HL_AVX_FUNCTIONS_H_
......@@ -12,8 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_BASE_H_
#define HL_BASE_H_
......@@ -153,7 +151,6 @@ typedef enum {
HL_VALUE_END
} hl_matrix_value_t;
/**
* @brief HPPL matrix format.
*/
......@@ -163,8 +160,7 @@ typedef enum {
HL_SPARSE_END
} hl_matrix_format_t;
typedef struct _hl_matrix_s * hl_matrix_s;
typedef struct _hl_matrix_s *hl_matrix_s;
/**
* @brief HPPL sparse matrix.
......@@ -209,7 +205,6 @@ typedef struct {
#define HL_FLOAT_MIN 2.2250738585072014e-308
#endif
/**
* The maximum input value for exp, used to avoid overflow problem.
*
......@@ -217,14 +212,13 @@ typedef struct {
*/
#define EXP_MAX_INPUT 40.0
/**
* @brief DIVUP(x, y) is similar to ceil(x / y).
* @note For CUDA, DIVUP will be used to specify
* the size of blockDim.
*/
#ifndef DIVUP
#define DIVUP(x, y) (((x) + (y) - 1) / (y))
#define DIVUP(x, y) (((x) + (y)-1) / (y))
#endif
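// Example (illustrative): DIVUP rounds the quotient up, which is how a CUDA
// launch grid is typically sized so that every element gets a thread:
//
//   DIVUP(10, 4) == 3  // ceil(10 / 4)
//   const int threads = 256;
//   const int blocks = DIVUP(n, threads);  // enough blocks for n elements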
#ifdef __NVCC__
......@@ -244,11 +238,10 @@ extern __thread cudaStream_t default_stream;
#define CHECK_SYNC(msg) \
if (true == g_sync_flag) { \
hl_stream_synchronize(HPPL_STREAM_DEFAULT); \
cudaError_t err \
= (cudaError_t)hl_get_device_last_error(); \
CHECK_EQ(cudaSuccess, err) << "[" << msg << "] " \
<< "CUDA error: " \
<< hl_get_device_error_string((size_t)err); \
cudaError_t err = (cudaError_t)hl_get_device_last_error(); \
CHECK_EQ(cudaSuccess, err) \
<< "[" << msg << "] " \
<< "CUDA error: " << hl_get_device_error_string((size_t)err); \
}
#endif /* __NVCC__ */
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_BATCH_TRANSPOSE_H_
#define HL_BATCH_TRANSPOSE_H_
......@@ -31,10 +30,7 @@ limitations under the License. */
* order. Each batch has height * width data, which are
* arranged in height-first (or row-first) manner.
*/
extern void batchTranspose(const real* input,
real* output,
int width,
int height,
int batchSize);
extern void batchTranspose(
const real* input, real* output, int width, int height, int batchSize);
#endif // HL_BATCH_TRANSPOSE_H_
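// Reference semantics in plain C++ (a sketch for illustration only; the real
// implementation runs on the GPU). Each height x width batch, stored
// row-first, is transposed into width x height:
//
//   for (int b = 0; b < batchSize; ++b)
//     for (int r = 0; r < height; ++r)
//       for (int c = 0; c < width; ++c)
//         output[(b * width + c) * height + r] =
//             input[(b * height + r) * width + c];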
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_CNN_H_
#define HL_CNN_H_
......@@ -37,15 +36,21 @@ limitations under the License. */
* @param[in] alpha
* @param[in] beta
*/
extern void hl_shrink_col2feature(
const real * dataCol, size_t channels,
size_t height, size_t width,
size_t blockH, size_t blockW,
size_t strideH, size_t strideW,
size_t paddingH, size_t paddingW,
size_t outputH, size_t outputW,
extern void hl_shrink_col2feature(const real* dataCol,
size_t channels,
size_t height,
size_t width,
size_t blockH,
size_t blockW,
size_t strideH,
size_t strideW,
size_t paddingH,
size_t paddingW,
size_t outputH,
size_t outputW,
real* dataIm,
real alpha = 1.0f, real beta = 0.0f);
real alpha = 1.0f,
real beta = 0.0f);
/**
* @brief Expand feature to column.
......@@ -65,13 +70,18 @@ extern void hl_shrink_col2feature(
* @param[out] dataCol expand data.
*
*/
extern void hl_expand_feature2col(
const real* dataIm, size_t channels,
size_t height, size_t width,
size_t blockH, size_t blockW,
size_t strideH, size_t strideW,
size_t paddingH, size_t paddingW,
size_t outputH, size_t outputW,
extern void hl_expand_feature2col(const real* dataIm,
size_t channels,
size_t height,
size_t width,
size_t blockH,
size_t blockW,
size_t strideH,
size_t strideW,
size_t paddingH,
size_t paddingW,
size_t outputH,
size_t outputW,
real* dataCol);
/**
......@@ -94,15 +104,21 @@ extern void hl_expand_feature2col(
* @param[in] tgtStride stride between output data samples.
*
*/
extern void hl_maxpool_forward(
const int frameCnt, const real* inputData,
extern void hl_maxpool_forward(const int frameCnt,
const real* inputData,
const int channels,
const int height, const int width,
const int pooledH, const int pooledW,
const int sizeX, const int sizeY,
const int strideH, const int strideW,
const int paddingH, const int paddingW,
real* tgtData, const int tgtStride);
const int height,
const int width,
const int pooledH,
const int pooledW,
const int sizeX,
const int sizeY,
const int strideH,
const int strideW,
const int paddingH,
const int paddingW,
real* tgtData,
const int tgtStride);
/**
* @brief Maximum pool backward.
......@@ -128,17 +144,25 @@ extern void hl_maxpool_forward(
* @param[in] outStride stride between output data samples.
*
*/
extern void hl_maxpool_backward(
const int frameCnt, const real* inputData,
const real* outData, const real* outGrad,
const int channels, const int height,
extern void hl_maxpool_backward(const int frameCnt,
const real* inputData,
const real* outData,
const real* outGrad,
const int channels,
const int height,
const int width,
const int pooledH, const int pooledW,
const int sizeX, const int sizeY,
const int strideH, const int strideW,
const int paddingH, const int paddingW,
real scaleA, real scaleB,
real* targetGrad, const int outStride);
const int pooledH,
const int pooledW,
const int sizeX,
const int sizeY,
const int strideH,
const int strideW,
const int paddingH,
const int paddingW,
real scaleA,
real scaleB,
real* targetGrad,
const int outStride);
/**
* @brief Average pool forward.
......@@ -160,15 +184,21 @@ extern void hl_maxpool_backward(
* @param[in] tgtStride stride between output data samples.
*
*/
extern void hl_avgpool_forward(
const int frameCnt, const real* inputData,
extern void hl_avgpool_forward(const int frameCnt,
const real* inputData,
const int channels,
const int height, const int width,
const int pooledH, const int pooledW,
const int sizeX, const int sizeY,
const int strideH, const int strideW,
const int paddingH, const int paddingW,
real* tgtData, const int tgtStride);
const int height,
const int width,
const int pooledH,
const int pooledW,
const int sizeX,
const int sizeY,
const int strideH,
const int strideW,
const int paddingH,
const int paddingW,
real* tgtData,
const int tgtStride);
/**
* @brief Average pool backward.
......@@ -192,16 +222,23 @@ extern void hl_avgpool_forward(
* @param[in] outStride stride between output data samples.
*
*/
extern void hl_avgpool_backward(
const int frameCnt, const real* outGrad,
const int channels, const int height,
extern void hl_avgpool_backward(const int frameCnt,
const real* outGrad,
const int channels,
const int height,
const int width,
const int pooledH, const int pooledW,
const int sizeX, const int sizeY,
const int strideH, const int strideW,
int paddingH, int paddingW,
real scaleA, real scaleB,
real* backGrad, const int outStride);
const int pooledH,
const int pooledW,
const int sizeX,
const int sizeY,
const int strideH,
const int strideW,
int paddingH,
int paddingW,
real scaleA,
real scaleB,
real* backGrad,
const int outStride);
/**
* @brief Cross-map-response normalize forward.
......@@ -218,10 +255,16 @@ extern void hl_avgpool_backward(
* @param[in] beta scale.
*
*/
extern void hl_CMRNorm_forward(
size_t frameCnt, const real* in, real* scale, real* out,
size_t channels, size_t height, size_t width, size_t sizeX,
real alpha, real beta);
extern void hl_CMRNorm_forward(size_t frameCnt,
const real* in,
real* scale,
real* out,
size_t channels,
size_t height,
size_t width,
size_t sizeX,
real alpha,
real beta);
/**
* @brief Cross-map-response normalize backward.
......@@ -240,11 +283,18 @@ extern void hl_CMRNorm_forward(
* @param[in] beta scale.
*
*/
extern void hl_CMRNorm_backward(
size_t frameCnt, const real* inV, const real* scale,
const real* outV, const real* outDiff, real *inDiff,
size_t channels, size_t height, size_t width, size_t sizeX,
real alpha, real beta);
extern void hl_CMRNorm_backward(size_t frameCnt,
const real* inV,
const real* scale,
const real* outV,
const real* outDiff,
real* inDiff,
size_t channels,
size_t height,
size_t width,
size_t sizeX,
real alpha,
real beta);
/**
* @brief Bilinear interpolation forward.
......@@ -278,24 +328,24 @@ extern void hl_bilinear_forward(const real* inData,
const real ratioH,
const real ratioW);
/**
* @brief Bilinear interpolation backward.
*
* @param[out] inGrad input gradient.
* @param[in] inImgH input image height.
* @param[in] inImgW input image width.
* @param[in] inputH input batchSize.
* @param[in] inputW input image data dim.
* @param[in] outGrad output gradient.
* @param[in] outImgH output image height.
* @param[in] outImgW output image width.
* @param[in] outputH output batchSize.
* @param[in] outputW output image data dim.
* @param[in] numChannels number of channels.
* @param[in] ratioH inImgH / outImgH.
* @param[in] ratioW inImgW / outImgW.
*
*/
/**
* @brief Bilinear interpolation backward.
*
* @param[out] inGrad input gradient.
* @param[in] inImgH input image height.
* @param[in] inImgW input image width.
* @param[in] inputH input batchSize.
* @param[in] inputW input image data dim.
* @param[in] outGrad output gradient.
* @param[in] outImgH output image height.
* @param[in] outImgW output image width.
* @param[in] outputH output batchSize.
* @param[in] outputW output image data dim.
* @param[in] numChannels number of channels.
* @param[in] ratioH inImgH / outImgH.
* @param[in] ratioW inImgW / outImgW.
*
*/
extern void hl_bilinear_backward(real* inGrad,
const size_t inImgH,
const size_t inImgW,
......@@ -321,9 +371,13 @@ extern void hl_bilinear_backward(real* inGrad,
* @param[in] featLen feature length = image height * image width.
* @param[in] groups number of groups.
*/
extern void hl_maxout_forward(
const real* inData, real* outData, int* idData,
size_t batchSize, size_t size, size_t featLen, size_t groups);
extern void hl_maxout_forward(const real* inData,
real* outData,
int* idData,
size_t batchSize,
size_t size,
size_t featLen,
size_t groups);
/**
* @brief MaxOut backward.
......@@ -336,8 +390,12 @@ extern void hl_maxout_forward(
* @param[in] featLen feature length = image height * image width.
* @param[in] groups number of groups.
*/
extern void hl_maxout_backward(
real* inGrad, const real* outGrad, const int* idData,
size_t batchSize, size_t size, size_t featLen, size_t groups);
extern void hl_maxout_backward(real* inGrad,
const real* outGrad,
const int* idData,
size_t batchSize,
size_t size,
size_t featLen,
size_t groups);
#endif /* HL_CNN_H_ */
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_CUDA_H_
#define HL_CUDA_H_
......@@ -22,8 +21,7 @@ limitations under the License. */
/**
* @brief HPPL event.
*/
typedef struct _hl_event_st * hl_event_t;
typedef struct _hl_event_st *hl_event_t;
/**
* @brief return cuda runtime api version.
......@@ -42,7 +40,7 @@ extern void hl_start();
* if device is NULL, will start all GPU.
* @param[in] number number of devices.
*/
extern void hl_specify_devices_start(int* device, int number);
extern void hl_specify_devices_start(int *device, int number);
/**
* @brief Queries if a device may directly access a peer device's memory.
......@@ -126,7 +124,7 @@ extern int hl_get_device();
*
* @return dest_d pointer to device memory.
*/
extern void* hl_malloc_device(size_t size);
extern void *hl_malloc_device(size_t size);
/**
* @brief Free device memory.
......@@ -143,7 +141,7 @@ extern void hl_free_mem_device(void *dest_d);
*
* @return dest_h pointer to host memory.
*/
extern void* hl_malloc_host(size_t size);
extern void *hl_malloc_host(size_t size);
/**
* @brief Free host page-lock memory.
......@@ -261,8 +259,7 @@ extern void hl_destroy_event(hl_event_t event);
*
* @return time Time between start and end in ms.
*/
extern float hl_event_elapsed_time(hl_event_t start,
hl_event_t end);
extern float hl_event_elapsed_time(hl_event_t start, hl_event_t end);
/**
* @brief Records an event.
......@@ -300,7 +297,7 @@ extern void hl_set_device_flags_block();
/**
* @brief Returns the last error string from a cuda runtime call.
*/
extern const char* hl_get_device_error_string();
extern const char *hl_get_device_error_string();
/**
* @brief Returns the last error string from a cuda runtime call.
......@@ -309,7 +306,7 @@ extern const char* hl_get_device_error_string();
*
* @see hl_get_device_last_error()
*/
extern const char* hl_get_device_error_string(size_t err);
extern const char *hl_get_device_error_string(size_t err);
/**
* @brief Returns the last error number.
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_CUDA_CUBLAS_H_
#define HL_CUDA_CUBLAS_H_
......@@ -29,12 +28,8 @@ limitations under the License. */
* @param[in] ldc the first dimension of C_d.
*
*/
extern void hl_matrix_transpose(real *A_d,
real *C_d,
int dimM,
int dimN,
int lda,
int ldc);
extern void hl_matrix_transpose(
real *A_d, real *C_d, int dimM, int dimN, int lda, int ldc);
/*
* @brief Matrix transpose, while lda = dimN, ldc = dimM.
......@@ -45,10 +40,7 @@ extern void hl_matrix_transpose(real *A_d,
* @param[in] dimN matrix width.
*
*/
extern void hl_matrix_transpose(real *A_d,
real *C_d,
int dimM,
int dimN);
extern void hl_matrix_transpose(real *A_d, real *C_d, int dimM, int dimN);
/*
* @brief Matrix inverse
......@@ -60,11 +52,7 @@ extern void hl_matrix_transpose(real *A_d,
* @param[in] ldc the first dimension of C_d
*
*/
extern void hl_matrix_inverse(real *A_d,
real *C_d,
int dimN,
int lda,
int ldc);
extern void hl_matrix_inverse(real *A_d, real *C_d, int dimN, int lda, int ldc);
/**
* @brief C_d = alpha*(op(A_d) * op(B_d)) + beta*C_d
......@@ -84,12 +72,19 @@ extern void hl_matrix_inverse(real *A_d,
* @param[in] ldc the first dimension of C_d.
*
*/
extern void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
real *B_d, hl_trans_op_t transb,
extern void hl_matrix_mul(real *A_d,
hl_trans_op_t transa,
real *B_d,
hl_trans_op_t transb,
real *C_d,
int dimM, int dimN, int dimK,
real alpha, real beta,
int lda, int ldb, int ldc);
int dimM,
int dimN,
int dimK,
real alpha,
real beta,
int lda,
int ldb,
int ldc);
/**
* @brief C_d = alpha*(op(A_d) * op(B_d)) + beta*C_d
......@@ -106,11 +101,16 @@ extern void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
* @param[in] beta scalar used for multiplication.
*
*/
extern void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
real *B_d, hl_trans_op_t transb,
extern void hl_matrix_mul(real *A_d,
hl_trans_op_t transa,
real *B_d,
hl_trans_op_t transb,
real *C_d,
int dimM, int dimN, int dimK,
real alpha, real beta);
int dimM,
int dimN,
int dimK,
real alpha,
real beta);
/**
* @brief This function performs the matrix-vector multiplication.
......@@ -132,11 +132,17 @@ extern void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
*
*/
extern void hl_matrix_mul_vector(real *A_d, hl_trans_op_t trans,
real *B_d, real *C_d,
int dimM, int dimN,
real alpha, real beta,
int lda, int incb, int incc);
extern void hl_matrix_mul_vector(real *A_d,
hl_trans_op_t trans,
real *B_d,
real *C_d,
int dimM,
int dimN,
real alpha,
real beta,
int lda,
int incb,
int incc);
/**
* @brief This function performs the matrix-vector multiplication.
......@@ -154,9 +160,13 @@ extern void hl_matrix_mul_vector(real *A_d, hl_trans_op_t trans,
* @param[in] beta scalar used for multiplication.
*
*/
extern void hl_matrix_mul_vector(real *A_d, hl_trans_op_t trans,
real *B_d, real *C_d,
int dimM, int dimN,
real alpha, real beta);
extern void hl_matrix_mul_vector(real *A_d,
hl_trans_op_t trans,
real *B_d,
real *C_d,
int dimM,
int dimN,
real alpha,
real beta);
#endif /* HL_CUDA_CUBLAS_H_ */
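// Usage sketch (illustrative; assumes device buffers are already allocated
// and that HPPL_OP_N / HPPL_OP_T are the usual no-transpose / transpose
// flags of hl_trans_op_t):
//
//   // C_d = A_d * B_d for a (dimM x dimK) by (dimK x dimN) product:
//   hl_matrix_mul(A_d, HPPL_OP_N, B_d, HPPL_OP_N, C_d,
//                 dimM, dimN, dimK, /* alpha */ 1.0, /* beta */ 0.0);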
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_CUDA_CUDNN_H_
#define HL_CUDA_CUDNN_H_
......@@ -324,8 +323,7 @@ extern void hl_convolution_forward_add_bias(hl_tensor_descriptor bias,
* @param[in] sizeInBytes gpu workspace size (bytes).
* @param[in] convBwdFilterAlgo backward filter algorithm.
*/
extern void hl_convolution_backward_filter(
hl_tensor_descriptor input,
extern void hl_convolution_backward_filter(hl_tensor_descriptor input,
real* input_data,
hl_tensor_descriptor output,
real* output_grad_data,
......@@ -350,8 +348,7 @@ extern void hl_convolution_backward_filter(
* @param[in] sizeInBytes gpu workspace size (bytes).
* @param[in] convBwdDataAlgo backward data algorithm.
*/
extern void hl_convolution_backward_data(
hl_tensor_descriptor input,
extern void hl_convolution_backward_data(hl_tensor_descriptor input,
real* input_data_grad,
hl_tensor_descriptor output,
real* output_grad_data,
......@@ -383,8 +380,8 @@ extern void hl_convolution_backward_bias(hl_tensor_descriptor bias,
* @param[in] height matrix height.
* @param[in] width matrix width.
*/
extern void hl_softmax_forward(real *input,
real *output,
extern void hl_softmax_forward(real* input,
real* output,
int height,
int width);
......@@ -396,8 +393,8 @@ extern void hl_softmax_forward(real *input,
* @param[in] height matrix height.
* @param[in] width matrix width.
*/
extern void hl_softmax_backward(real *output_value,
real *output_grad,
extern void hl_softmax_backward(real* output_value,
real* output_grad,
int height,
int width);
......@@ -426,18 +423,18 @@ extern void hl_softmax_backward(real *output_value,
*
*/
extern void hl_batch_norm_forward_training(hl_tensor_descriptor inputDesc,
real *input,
real* input,
hl_tensor_descriptor outputDesc,
real *output,
real* output,
hl_tensor_descriptor bnParamDesc,
real *scale,
real *bias,
real* scale,
real* bias,
double factor,
real *runningMean,
real *runningInvVar,
real* runningMean,
real* runningInvVar,
double epsilon,
real *savedMean,
real *savedVar);
real* savedMean,
real* savedVar);
/**
* @brief cudnn batch norm forward.
......@@ -463,14 +460,14 @@ extern void hl_batch_norm_forward_training(hl_tensor_descriptor inputDesc,
*
*/
extern void hl_batch_norm_forward_inference(hl_tensor_descriptor inputDesc,
real *input,
real* input,
hl_tensor_descriptor outputDesc,
real *output,
real* output,
hl_tensor_descriptor bnParamDesc,
real *scale,
real *bias,
real *estimatedMean,
real *estimatedVar,
real* scale,
real* bias,
real* estimatedMean,
real* estimatedVar,
double epsilon);
/**
......@@ -483,7 +480,8 @@ extern void hl_batch_norm_forward_inference(hl_tensor_descriptor inputDesc,
* @param[in] inGradDesc input tensor descriptor desc.
* @param[in] inGrad input data.
* @param[in] dBnParamDesc tensor descriptor desc.
* bnScale, bnBias, running mean/var, save_mean/var.
* bnScale, bnBias, running mean/var,
* save_mean/var.
* @param[in] scale batch normalization scale parameter (in original
* paper scale is referred to as gamma).
* @param[in] scaleGrad batch normalization scale parameter (in original
......@@ -497,17 +495,17 @@ extern void hl_batch_norm_forward_inference(hl_tensor_descriptor inputDesc,
*
*/
extern void hl_batch_norm_backward(hl_tensor_descriptor inputDesc,
real *input,
real* input,
hl_tensor_descriptor outGradDesc,
real *outGrad,
real* outGrad,
hl_tensor_descriptor inGradDesc,
real *inGrad,
real* inGrad,
hl_tensor_descriptor dBnParamDesc,
real *scale,
real *scaleGrad,
real *biasGrad,
real* scale,
real* scaleGrad,
real* biasGrad,
double epsilon,
real *savedMean,
real *savedInvVar);
real* savedMean,
real* savedInvVar);
#endif // HL_CUDA_CUDNN_H_
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_DSO_LOADER_H_
#define HL_DSO_LOADER_H_
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_FUNCTIONS_H_
#define HL_FUNCTIONS_H_
......@@ -30,21 +29,21 @@ limitations under the License. */
#ifndef __NVCC__
namespace hppl {
/*
/*
* forward activation
*/
real relu(const real a);
real sigmoid(const real a);
real tanh(const real a);
real linear(const real a);
real relu(const real a);
real sigmoid(const real a);
real tanh(const real a);
real linear(const real a);
/*
/*
* backward activation
*/
real relu(const real a, const real b);
real sigmoid(const real a, const real b);
real tanh(const real a, const real b);
real linear(const real a, const real b);
real relu(const real a, const real b);
real sigmoid(const real a, const real b);
real tanh(const real a, const real b);
real linear(const real a, const real b);
} // namespace hppl
#ifdef __AVX__
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_GPU_H_
#define HL_GPU_H_
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_LSTM_H_
#define HL_LSTM_H_
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_MATRIX_H_
#define HL_MATRIX_H_
......@@ -30,13 +29,8 @@ limitations under the License. */
* @param[in] beta scalar used for addition.
*
*/
extern void hl_matrix_add(real* A_d,
real* B_d,
real* C_d,
int dimM,
int dimN,
real alpha,
real beta);
extern void hl_matrix_add(
real* A_d, real* B_d, real* C_d, int dimM, int dimN, real alpha, real beta);
/**
* @brief Matrix Softmax.
*
......@@ -46,7 +40,7 @@ extern void hl_matrix_add(real* A_d,
* @param[in] dimN matrix width.
*
*/
extern void hl_matrix_softmax(real *A_d, real *C_d, int dimM, int dimN);
extern void hl_matrix_softmax(real* A_d, real* C_d, int dimM, int dimN);
/**
* @brief Matrix softmax derivative.
......@@ -58,11 +52,8 @@ extern void hl_matrix_softmax(real *A_d, real *C_d, int dimM, int dimN);
* @param[in] dimN matrix width.
*
*/
extern void hl_matrix_softmax_derivative(real* grad_d,
real* output_d,
real* sftmaxSum_d,
int dimM,
int dimN);
extern void hl_matrix_softmax_derivative(
real* grad_d, real* output_d, real* sftmaxSum_d, int dimM, int dimN);
/**
* @brief Sequence softmax.
......@@ -73,8 +64,8 @@ extern void hl_matrix_softmax_derivative(real* grad_d,
* @param[in] numSequence sequence number.
*
*/
extern void hl_sequence_softmax_forward(real *A_d,
real *C_d,
extern void hl_sequence_softmax_forward(real* A_d,
real* C_d,
const int* index,
int numSequence);
......@@ -88,11 +79,8 @@ extern void hl_sequence_softmax_forward(real *A_d,
* @param[in] dimN matrix width.
*
*/
extern void hl_matrix_classification_error(real* A_d,
int* B_d,
real* C_d,
int dimM,
int dimN);
extern void hl_matrix_classification_error(
real* A_d, int* B_d, real* C_d, int dimM, int dimN);
/**
* @brief Matrix cross entropy.
......@@ -104,11 +92,8 @@ extern void hl_matrix_classification_error(real* A_d,
* @param[in] dimN matrix width.
*
*/
extern void hl_matrix_cross_entropy(real* A_d,
real* C_d,
int* label_d,
int dimM,
int dimN);
extern void hl_matrix_cross_entropy(
real* A_d, real* C_d, int* label_d, int dimM, int dimN);
/**
* @brief Matrix cross entropy back propagation.
......@@ -120,11 +105,8 @@ extern void hl_matrix_cross_entropy(real* A_d,
* @param[in] dimN matrix width.
*
*/
extern void hl_matrix_cross_entropy_bp(real* grad_d,
real* output_d,
int* label_d,
int dimM,
int dimN);
extern void hl_matrix_cross_entropy_bp(
real* grad_d, real* output_d, int* label_d, int dimM, int dimN);
/**
* @brief Matrix multi-binary label cross entropy
......@@ -135,11 +117,8 @@ extern void hl_matrix_cross_entropy_bp(real* grad_d,
* @param[in] dimM matrix height.
* @param[in] dimN matrix width.
*/
extern void hl_matrix_multi_binary_cross_entropy(real* output,
real* entropy,
hl_sparse_matrix_s mat,
int dimM,
int dimN);
extern void hl_matrix_multi_binary_cross_entropy(
real* output, real* entropy, hl_sparse_matrix_s mat, int dimM, int dimN);
/**
* @brief Matrix multi-binary label cross entropy backprop
......@@ -150,11 +129,8 @@ extern void hl_matrix_multi_binary_cross_entropy(real* output,
* @param[in] dimM matrix height.
* @param[in] dimN matrix width.
*/
extern void hl_matrix_multi_binary_cross_entropy_bp(real* output,
real* grad,
hl_sparse_matrix_s mat,
int dimM,
int dimN);
extern void hl_matrix_multi_binary_cross_entropy_bp(
real* output, real* grad, hl_sparse_matrix_s mat, int dimM, int dimN);
/**
* @brief Matrix zero memory.
......@@ -176,12 +152,8 @@ extern void hl_matrix_zero_mem(real* data, int num);
* @param[in] partial_sum
*/
extern void hl_param_relu_forward(real* output,
real* input,
real* w,
int width,
int height,
int partial_sum);
extern void hl_param_relu_forward(
real* output, real* input, real* w, int width, int height, int partial_sum);
/**
* @brief parameter relu backward w
*
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_SEQUENCE_H_
#define HL_SEQUENCE_H_
......@@ -32,7 +31,7 @@ limitations under the License. */
extern void hl_max_sequence_forward(real* input,
const int* sequence,
real* output,
int *index,
int* index,
int numSequences,
int dim);
......@@ -46,11 +45,8 @@ extern void hl_max_sequence_forward(real* input,
* @param[in] dim input dimension.
*
*/
extern void hl_max_sequence_backward(real* outputGrad,
int *index,
real* inputGrad,
int numSequences,
int dim);
extern void hl_max_sequence_backward(
real* outputGrad, int* index, real* inputGrad, int numSequences, int dim);
/**
* @brief Context projection forward.
......@@ -63,7 +59,8 @@ extern void hl_max_sequence_backward(real* outputGrad,
* @param[in] inputDim input sequence dimension.
* @param[in] contextLength context length.
* @param[in] contextStart context start.
* @param[in] beginPad number of extra timesteps added at the beginning.
* @param[in] beginPad number of extra timesteps added at the
* beginning.
* @param[in] isPadding trainable padding.
*
*/
......@@ -109,7 +106,8 @@ extern void hl_context_projection_backward_data(real* outputGrad,
* @param[in] totalPad number of extra timesteps.
* @param[in] contextLength context length.
* @param[in] contextStart context start.
* @param[in] beginPad number of extra timesteps added at the beginning.
* @param[in] beginPad number of extra timesteps added at the
* beginning.
*
*/
extern void hl_context_projection_backward_weight(real* outputGrad,
......@@ -141,9 +139,9 @@ extern void hl_context_projection_backward_weight(real* outputGrad,
* @param[in] seq2batch copy direction.
*
*/
extern void hl_sequence2batch_copy(real *batch,
real *sequence,
const int *batchIndex,
extern void hl_sequence2batch_copy(real* batch,
real* sequence,
const int* batchIndex,
int seqWidth,
int batchCount,
bool seq2batch);
......@@ -167,9 +165,9 @@ extern void hl_sequence2batch_copy(real *batch,
* @param[in] seq2batch copy direction.
*
*/
extern void hl_sequence2batch_add(real *batch,
real *sequence,
int *batchIndex,
extern void hl_sequence2batch_add(real* batch,
real* sequence,
int* batchIndex,
int seqWidth,
int batchCount,
bool seq2batch);
......
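The `seq2batch` flag on the two copy routines above selects the direction, and `batchIndex` maps each batch row to a sequence row. A hedged sketch of that contract as documented (illustrative, not the CUDA kernel; the exact index orientation is an assumption):

```cpp
typedef float real;  // stand-in for the hppl real type

// Assumed contract: batchIndex[i] names the sequence row behind batch row i;
// seq2batch chooses which side is read and which is written.
void sequence2batch_copy_ref(real* batch, real* sequence,
                             const int* batchIndex,
                             int seqWidth, int batchCount, bool seq2batch) {
  for (int i = 0; i < batchCount; ++i) {
    real* b = batch + i * seqWidth;
    real* s = sequence + batchIndex[i] * seqWidth;
    for (int j = 0; j < seqWidth; ++j) {
      if (seq2batch) b[j] = s[j];  // gather: sequence -> batch
      else           s[j] = b[j];  // scatter: batch -> sequence
    }
  }
}
```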
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_SPARSE_H_
#define HL_SPARSE_H_
......@@ -60,7 +59,7 @@ extern void hl_free_sparse_matrix(hl_sparse_matrix_s A_d);
*
*/
extern void hl_construct_sparse_matrix(hl_sparse_matrix_s *A_d,
void * dest_d,
void *dest_d,
size_t size,
hl_matrix_format_t format,
hl_matrix_value_t value_type,
......@@ -94,9 +93,9 @@ extern void hl_construct_sparse_matrix(hl_sparse_matrix_s *A_d,
*
*/
extern void hl_construct_sparse_matrix(hl_sparse_matrix_s *A_d,
real* value_d,
int* rows_d,
int* cols_d,
real *value_d,
int *rows_d,
int *cols_d,
hl_matrix_format_t format,
hl_matrix_value_t value_type,
int dimM,
......@@ -259,10 +258,14 @@ extern void hl_matrix_csr_mul_dense(hl_sparse_matrix_s A_d,
*/
extern void hl_matrix_csc_mul_dense(hl_sparse_matrix_s A_d,
hl_trans_op_t transa,
real *B_d, hl_trans_op_t transb,
real *B_d,
hl_trans_op_t transb,
real *C_d,
int dimM, int dimN, int dimK,
real alpha, real beta);
int dimM,
int dimN,
int dimK,
real alpha,
real beta);
/**
* @brief C_d = alpha*(op(A_d) * op(B_d)) + beta*C_d.
......@@ -311,11 +314,16 @@ extern void hl_matrix_dense_mul_csc(real *A_d,
 * @note transb does not support HPPL_OP_T.
*
*/
extern void hl_sparse_matrix_mul(real* A_d, hl_trans_op_t transa,
real *B_d, hl_trans_op_t transb,
extern void hl_sparse_matrix_mul(real *A_d,
hl_trans_op_t transa,
real *B_d,
hl_trans_op_t transb,
hl_sparse_matrix_s C_d,
int dimM, int dimN, int dimK,
real alpha, real beta);
int dimM,
int dimN,
int dimK,
real alpha,
real beta);
/**
* @brief C_d = alpha*(op(A_d) * op(B_d)) + beta*C_d
......@@ -336,12 +344,16 @@ extern void hl_sparse_matrix_mul(real* A_d, hl_trans_op_t transa,
 * @note transa does not support HPPL_OP_T.
*
*/
extern void hl_matrix_dense_mul_csr(real *A_d, hl_trans_op_t transa,
extern void hl_matrix_dense_mul_csr(real *A_d,
hl_trans_op_t transa,
hl_sparse_matrix_s B_d,
hl_trans_op_t transb,
real *C_d,
int dimM, int dimN, int dimK,
real alpha, real beta);
int dimM,
int dimN,
int dimK,
real alpha,
real beta);
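The @brief lines above all describe the same BLAS-style contract, C_d = alpha * (op(A_d) * op(B_d)) + beta * C_d, with op() selected by the trans flags. A dense CPU reference of that formula (sparse storage omitted; this only illustrates the alpha/beta/transpose semantics):

```cpp
typedef float real;  // stand-in for the hppl real type

// Dense reference for C = alpha * op(A) * op(B) + beta * C (row-major).
void gemm_ref(const real* A, bool transA, const real* B, bool transB,
              real* C, int dimM, int dimN, int dimK, real alpha, real beta) {
  for (int m = 0; m < dimM; ++m) {
    for (int n = 0; n < dimN; ++n) {
      real sum = 0;
      for (int k = 0; k < dimK; ++k) {
        real a = transA ? A[k * dimM + m] : A[m * dimK + k];
        real b = transB ? B[n * dimK + k] : B[k * dimN + n];
        sum += a * b;
      }
      C[m * dimN + n] = alpha * sum + beta * C[m * dimN + n];
    }
  }
}
```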
/**
* @brief Memcpy csc_matrix to host.
......@@ -412,7 +424,6 @@ extern void hl_memcpy_from_csr_matrix(real *csr_val,
hl_sparse_matrix_s csr_matrix,
hl_stream_t stream);
/**
* @brief A_d[j] += B_d[i,j] for i in range(height)
*
......@@ -423,19 +434,13 @@ extern void hl_memcpy_from_csr_matrix(real *csr_val,
* @param[in] scale scale of B_d
*
*/
extern void hl_sparse_matrix_column_sum(real* A_d,
hl_sparse_matrix_s B_d,
int dimM,
int dimN,
real scale);
extern void hl_sparse_matrix_column_sum(
real *A_d, hl_sparse_matrix_s B_d, int dimM, int dimN, real scale);
/**
 * @brief implementation of csr sparse matrix in hl_sparse_matrix_column_sum
*/
extern void hl_matrix_csr_column_sum(real* A_d,
hl_sparse_matrix_s B_d,
int dimM,
int dimN,
real scale);
extern void hl_matrix_csr_column_sum(
real *A_d, hl_sparse_matrix_s B_d, int dimM, int dimN, real scale);
/**
* @brief A_d[i,j] += B_d[j]
......@@ -446,13 +451,13 @@ extern void hl_matrix_csr_column_sum(real* A_d,
*
*/
extern void hl_sparse_matrix_add_bias(hl_sparse_matrix_s A_d,
real* B_d,
real *B_d,
real scale);
/**
* @brief implementation of csr sparse matrix in hl_sparse_matrix_add_bias
*/
extern void hl_matrix_csr_add_bias(hl_sparse_matrix_s A_d,
real* B_d,
real *B_d,
real scale);
/**
......@@ -470,7 +475,7 @@ extern void hl_matrix_csr_add_bias(hl_sparse_matrix_s A_d,
*
*/
extern void hl_sparse_matrix_add_dense(hl_sparse_matrix_s A_d,
real* B_d,
real *B_d,
int dimM,
int dimN,
real alpha,
......@@ -479,7 +484,7 @@ extern void hl_sparse_matrix_add_dense(hl_sparse_matrix_s A_d,
* @brief implementation of csr sparse matrix in hl_sparse_matrix_add_dense
*/
extern void hl_matrix_csr_add_dense(hl_sparse_matrix_s A_d,
real* B_d,
real *B_d,
int dimM,
int dimN,
real alpha,
......@@ -493,7 +498,7 @@ extern void hl_matrix_csr_add_dense(hl_sparse_matrix_s A_d,
* @return return rows pointer, which is gpu address
*
*/
extern int* hl_sparse_matrix_get_rows(hl_sparse_matrix_s sMat);
extern int *hl_sparse_matrix_get_rows(hl_sparse_matrix_s sMat);
/**
 * @brief get cols pointer of GpuSparseMatrix
......@@ -503,7 +508,7 @@ extern int* hl_sparse_matrix_get_rows(hl_sparse_matrix_s sMat);
* @return return cols pointer, which is gpu address
*
*/
extern int* hl_sparse_matrix_get_cols(hl_sparse_matrix_s sMat);
extern int *hl_sparse_matrix_get_cols(hl_sparse_matrix_s sMat);
/**
 * @brief get value pointer of GpuSparseMatrix
......@@ -513,7 +518,6 @@ extern int* hl_sparse_matrix_get_cols(hl_sparse_matrix_s sMat);
* @return return value pointer, which is gpu address
*
*/
extern real* hl_sparse_matrix_get_value(hl_sparse_matrix_s sMat);
extern real *hl_sparse_matrix_get_value(hl_sparse_matrix_s sMat);
#endif /* HL_SPARSE_H_ */
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_TABLE_APPLY_H_
#define HL_TABLE_APPLY_H_
......@@ -31,8 +30,10 @@ limitations under the License. */
* @param[in] dim width of table.
*
*/
extern void hl_matrix_select_rows(real* output, int ldo,
real* table, int ldt,
extern void hl_matrix_select_rows(real* output,
int ldo,
real* table,
int ldt,
int* ids,
int numSamples,
int tableSize,
......@@ -53,8 +54,10 @@ extern void hl_matrix_select_rows(real* output, int ldo,
* @param[in] dim width of table.
*
*/
extern void hl_matrix_add_to_rows(real* table, int ldt,
real* input, int ldi,
extern void hl_matrix_add_to_rows(real* table,
int ldt,
real* input,
int ldi,
int* ids,
int numSamples,
int tableSize,
......@@ -72,8 +75,7 @@ extern void hl_matrix_add_to_rows(real* table, int ldt,
*
*/
template <class T>
extern void hl_vector_select_from(T* dst, int sized,
const T* src, int sizes,
const int* ids, int sizei);
extern void hl_vector_select_from(
T* dst, int sized, const T* src, int sizes, const int* ids, int sizei);
#endif /* HL_TABLE_APPLY_H_ */
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_TIME_H_
#define HL_TIME_H_
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_TOP_K_H_
#define HL_TOP_K_H_
......@@ -31,9 +30,11 @@ limitations under the License. */
* @param[in] numSamples height of input value.
*
*/
extern void hl_matrix_top_k(real* topVal, int ldv,
int * topIds,
real* src, int lds,
extern void hl_matrix_top_k(real* topVal,
int ldv,
int* topIds,
real* src,
int lds,
int dim,
int beamSize,
int numSamples);
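A hedged CPU sketch of the per-row top-k contract documented above: each of `numSamples` rows of `src` yields its `beamSize` largest values and their column indices. `ldv`/`lds` are taken to be row strides, and `topIds` is assumed to share `ldv`; both are assumptions for illustration:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

typedef float real;  // stand-in for the hppl real type

// Illustrative reference: per-row top-k values and their column indices.
void matrix_top_k_ref(real* topVal, int ldv, int* topIds,
                      const real* src, int lds,
                      int dim, int beamSize, int numSamples) {
  std::vector<int> idx(dim);
  for (int r = 0; r < numSamples; ++r) {
    const real* row = src + r * lds;
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + beamSize, idx.end(),
                      [row](int a, int b) { return row[a] > row[b]; });
    for (int k = 0; k < beamSize; ++k) {
      topVal[r * ldv + k] = row[idx[k]];
      topIds[r * ldv + k] = idx[k];
    }
  }
}
```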
......@@ -50,8 +51,9 @@ extern void hl_matrix_top_k(real* topVal, int ldv,
*
* @note Only support HL_SPARSE_CSR format.
*/
extern void hl_sparse_matrix_top_k(real* topVal, int ldv,
int * topIds,
extern void hl_sparse_matrix_top_k(real* topVal,
int ldv,
int* topIds,
hl_sparse_matrix_s src,
int beamSize,
int numSamples);
......
......@@ -12,29 +12,22 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_AGGREGATE_STUB_H_
#define HL_AGGREGATE_STUB_H_
#include "hl_aggregate.h"
inline void hl_matrix_row_sum(real *A_d, real *C_d,
int dimM, int dimN) {}
inline void hl_matrix_row_sum(real *A_d, real *C_d, int dimM, int dimN) {}
inline void hl_matrix_row_max(real *A_d, real *C_d,
int dimM, int dimN) {}
inline void hl_matrix_row_max(real *A_d, real *C_d, int dimM, int dimN) {}
inline void hl_matrix_row_min(real *A_d, real *C_d,
int dimM, int dimN) {}
inline void hl_matrix_row_min(real *A_d, real *C_d, int dimM, int dimN) {}
inline void hl_matrix_column_sum(real *A_d, real *C_d,
int dimM, int dimN) {}
inline void hl_matrix_column_sum(real *A_d, real *C_d, int dimM, int dimN) {}
inline void hl_matrix_column_max(real *A_d, real *C_d,
int dimM, int dimN) {}
inline void hl_matrix_column_max(real *A_d, real *C_d, int dimM, int dimN) {}
inline void hl_matrix_column_min(real *A_d, real *C_d,
int dimM, int dimN) {}
inline void hl_matrix_column_min(real *A_d, real *C_d, int dimM, int dimN) {}
inline void hl_vector_sum(real *A_d, real *C_h, int dimM) {}
......
......@@ -12,84 +12,134 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_CNN_STUB_H_
#define HL_CNN_STUB_H_
#include "hl_cnn.h"
inline void hl_shrink_col2feature(const real* dataCol,
                                  size_t channels,
                                  size_t height,
                                  size_t width,
                                  size_t blockH,
                                  size_t blockW,
                                  size_t strideH,
                                  size_t strideW,
                                  size_t paddingH,
                                  size_t paddingW,
                                  size_t outputH,
                                  size_t outputW,
                                  real* dataIm,
                                  real alpha,
                                  real beta) {}

inline void hl_expand_feature2col(const real* dataIm,
                                  size_t channels,
                                  size_t height,
                                  size_t width,
                                  size_t blockH,
                                  size_t blockW,
                                  size_t strideH,
                                  size_t strideW,
                                  size_t paddingH,
                                  size_t paddingW,
                                  size_t outputH,
                                  size_t outputW,
                                  real* dataCol) {}

inline void hl_maxpool_forward(const int frameCnt,
                               const real* inputData,
                               const int channels,
                               const int height,
                               const int width,
                               const int pooledH,
                               const int pooledW,
                               const int sizeX,
                               const int sizeY,
                               const int strideH,
                               const int strideW,
                               const int paddingH,
                               const int paddingW,
                               real* tgtData,
                               const int tgtStride) {}

inline void hl_maxpool_backward(const int frameCnt,
                                const real* inputData,
                                const real* outData,
                                const real* outGrad,
                                const int channels,
                                const int height,
                                const int width,
                                const int pooledH,
                                const int pooledW,
                                const int sizeX,
                                const int sizeY,
                                const int strideH,
                                const int strideW,
                                const int paddingH,
                                const int paddingW,
                                real scaleA,
                                real scaleB,
                                real* targetGrad,
                                const int outStride) {}

inline void hl_avgpool_forward(const int frameCnt,
                               const real* inputData,
                               const int channels,
                               const int height,
                               const int width,
                               const int pooledH,
                               const int pooledW,
                               const int sizeX,
                               const int sizeY,
                               const int strideH,
                               const int strideW,
                               const int paddingH,
                               const int paddingW,
                               real* tgtData,
                               const int tgtStride) {}

inline void hl_avgpool_backward(const int frameCnt,
                                const real* outGrad,
                                const int channels,
                                const int height,
                                const int width,
                                const int pooledH,
                                const int pooledW,
                                const int sizeX,
                                const int sizeY,
                                const int strideH,
                                const int strideW,
                                int paddingH,
                                int paddingW,
                                real scaleA,
                                real scaleB,
                                real* backGrad,
                                const int outStride) {}

inline void hl_CMRNorm_forward(size_t frameCnt,
                               const real* in,
                               real* scale,
                               real* out,
                               size_t channels,
                               size_t height,
                               size_t width,
                               size_t sizeX,
                               real alpha,
                               real beta) {}

inline void hl_CMRNorm_backward(size_t frameCnt,
                                const real* inV,
                                const real* scale,
                                const real* outV,
                                const real* outDiff,
                                real* inDiff,
                                size_t channels,
                                size_t height,
                                size_t width,
                                size_t sizeX,
                                real alpha,
                                real beta) {}
inline void hl_bilinear_forward(const real* inData,
const size_t inImgH,
......@@ -119,12 +169,20 @@ inline void hl_bilinear_backward(real* inGrad,
const real ratioH,
const real ratioW) {}
inline void hl_maxout_forward(
const real* inData, real* outData, int* idData,
size_t batchSize, size_t size, size_t featLen, size_t group) {}
inline void hl_maxout_forward(const real* inData,
real* outData,
int* idData,
size_t batchSize,
size_t size,
size_t featLen,
size_t group) {}
inline void hl_maxout_backward(
real* inGrad, const real* outGrad, const int* idData,
size_t batchSize, size_t size, size_t featLen, size_t group) {}
inline void hl_maxout_backward(real* inGrad,
const real* outGrad,
const int* idData,
size_t batchSize,
size_t size,
size_t featLen,
size_t group) {}
#endif // HL_CNN_STUB_H_
......@@ -12,41 +12,42 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_CUDA_CUBLAS_STUB_H_
#define HL_CUDA_CUBLAS_STUB_H_
#include "hl_cuda_cublas.h"
inline void hl_matrix_transpose(
    real *A_d, real *C_d, int dimM, int dimN, int lda, int ldc) {}

inline void hl_matrix_transpose(real *A_d, real *C_d, int dimM, int dimN) {}

inline void hl_matrix_inverse(
    real *A_d, real *C_d, int dimN, int lda, int ldc) {}

inline void hl_matrix_mul(real *A_d,
                          hl_trans_op_t transa,
                          real *B_d,
                          hl_trans_op_t transb,
                          real *C_d,
                          int dimM,
                          int dimN,
                          int dimK,
                          real alpha,
                          real beta,
                          int lda,
                          int ldb,
                          int ldc) {}

inline void hl_matrix_mul(real *A_d,
                          hl_trans_op_t transa,
                          real *B_d,
                          hl_trans_op_t transb,
                          real *C_d,
                          int dimM,
                          int dimN,
                          int dimK,
                          real alpha,
                          real beta) {}
#endif // HL_CUDA_CUBLAS_STUB_H_
......@@ -12,15 +12,12 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_CUDA_CUDNN_STUB_H_
#define HL_CUDA_CUDNN_STUB_H_
#include "hl_cuda_cudnn.h"
inline int hl_get_cudnn_lib_version() {
return 0;
}
inline int hl_get_cudnn_lib_version() { return 0; }
inline void hl_create_tensor_descriptor(hl_tensor_descriptor* image_desc) {}
......@@ -120,8 +117,7 @@ inline void hl_convolution_forward_add_bias(hl_tensor_descriptor bias,
hl_tensor_descriptor output,
real* output_data) {}
inline void hl_convolution_backward_filter(
hl_tensor_descriptor input,
inline void hl_convolution_backward_filter(hl_tensor_descriptor input,
real* input_data,
hl_tensor_descriptor output,
real* output_grad_data,
......@@ -132,8 +128,7 @@ inline void hl_convolution_backward_filter(
size_t sizeInBytes,
int convBwdFilterAlgo) {}
inline void hl_convolution_backward_data(
hl_tensor_descriptor input,
inline void hl_convolution_backward_data(hl_tensor_descriptor input,
real* input_data_grad,
hl_tensor_descriptor output,
real* output_grad_data,
......@@ -149,53 +144,53 @@ inline void hl_convolution_backward_bias(hl_tensor_descriptor bias,
hl_tensor_descriptor output,
real* output_grad_data) {}
inline void hl_softmax_forward(real *input,
real *output,
inline void hl_softmax_forward(real* input,
real* output,
int height,
int width) {}
inline void hl_softmax_backward(real *output_value,
real *output_grad,
inline void hl_softmax_backward(real* output_value,
real* output_grad,
int height,
int width) {}
inline void hl_batch_norm_forward_training(hl_tensor_descriptor inputDesc,
real *input,
real* input,
hl_tensor_descriptor outputDesc,
real *output,
real* output,
hl_tensor_descriptor bnParamDesc,
real *scale,
real *bias,
real* scale,
real* bias,
double factor,
real *runningMean,
real *runningInvVar,
real* runningMean,
real* runningInvVar,
double epsilon,
real *savedMean,
real *savedVar) {}
real* savedMean,
real* savedVar) {}
inline void hl_batch_norm_forward_inference(hl_tensor_descriptor inputDesc,
real *input,
real* input,
hl_tensor_descriptor outputDesc,
real *output,
real* output,
hl_tensor_descriptor bnParamDesc,
real *scale,
real *bias,
real *estimatedMean,
real *estimatedVar,
real* scale,
real* bias,
real* estimatedMean,
real* estimatedVar,
double epsilon) {}
inline void hl_batch_norm_backward(hl_tensor_descriptor inputDesc,
real *input,
real* input,
hl_tensor_descriptor outGradDesc,
real *outGrad,
real* outGrad,
hl_tensor_descriptor inGradDesc,
real *inGrad,
real* inGrad,
hl_tensor_descriptor dBnParamDesc,
real *scale,
real *scaleGrad,
real *biasGrad,
real* scale,
real* scaleGrad,
real* biasGrad,
double epsilon,
real *savedMean,
real *savedInvVar) {}
real* savedMean,
real* savedInvVar) {}
#endif // HL_CUDA_CUDNN_STUB_H_
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_CUDA_STUB_H_
#define HL_CUDA_STUB_H_
......@@ -24,17 +23,13 @@ inline void hl_specify_devices_start(int *device, int number) {}
inline void hl_init(int device) {}
inline int hl_get_cuda_lib_version(int device) {
return 0;
}
inline int hl_get_cuda_lib_version(int device) { return 0; }
inline void hl_fini() {}
inline void hl_set_sync_flag(bool flag) {}
inline bool hl_get_sync_flag() {
return false;
}
inline bool hl_get_sync_flag() { return false; }
inline int hl_get_device_count() { return 0; }
......@@ -42,11 +37,11 @@ inline void hl_set_device(int device) {}
inline int hl_get_device() { return 0; }
inline void* hl_malloc_device(size_t size) { return NULL; }
inline void *hl_malloc_device(size_t size) { return NULL; }
inline void hl_free_mem_device(void *dest_d) {}
inline void* hl_malloc_host(size_t size) { return NULL; }
inline void *hl_malloc_host(size_t size) { return NULL; }
inline void hl_free_mem_host(void *dest_h) {}
......@@ -64,7 +59,9 @@ inline void hl_rand(real *dest_d, size_t num) {}
inline void hl_srand(unsigned int seed) {}
inline void hl_memcpy_async(void *dst, void *src, size_t size,
inline void hl_memcpy_async(void *dst,
void *src,
size_t size,
hl_stream_t stream) {}
inline void hl_stream_synchronize(hl_stream_t stream) {}
......@@ -85,9 +82,9 @@ inline void hl_event_synchronize(hl_event_t event) {}
inline int hl_get_device_last_error() { return 0; }
inline const char* hl_get_device_error_string() { return NULL; }
inline const char *hl_get_device_error_string() { return NULL; }
inline const char* hl_get_device_error_string(size_t err) { return NULL; }
inline const char *hl_get_device_error_string(size_t err) { return NULL; }
inline bool hl_cuda_event_is_ready(hl_event_t event) { return true; }
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_LSTM_STUB_H_
#define HL_LSTM_STUB_H_
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_MATRIX_STUB_H_
#define HL_MATRIX_STUB_H_
......@@ -26,48 +25,30 @@ inline void hl_matrix_add(real* A_d,
real alpha,
real beta) {}
inline void hl_matrix_softmax(real *A_d, real *C_d, int dimM, int dimN) {}
inline void hl_matrix_softmax(real* A_d, real* C_d, int dimM, int dimN) {}
inline void hl_sequence_softmax_forward(real *A_d,
real *C_d,
inline void hl_sequence_softmax_forward(real* A_d,
real* C_d,
const int* index,
int numSequence) {}
inline void hl_matrix_softmax_derivative(real* grad_d,
real* output_d,
real* sftmaxSum_d,
int dimM,
int dimN) {}
inline void hl_matrix_softmax_derivative(
real* grad_d, real* output_d, real* sftmaxSum_d, int dimM, int dimN) {}
inline void hl_matrix_classification_error(real* A_d,
int* B_d,
real* C_d,
int dimM,
int dimN) {}
inline void hl_matrix_classification_error(
real* A_d, int* B_d, real* C_d, int dimM, int dimN) {}
inline void hl_matrix_cross_entropy(real* A_d,
real* C_d,
int* label_d,
int dimM,
int dimN) {}
inline void hl_matrix_cross_entropy(
real* A_d, real* C_d, int* label_d, int dimM, int dimN) {}
inline void hl_matrix_cross_entropy_bp(real* grad_d,
real* output_d,
int* label_d,
int dimM,
int dimN) {}
inline void hl_matrix_cross_entropy_bp(
real* grad_d, real* output_d, int* label_d, int dimM, int dimN) {}
inline void hl_matrix_multi_binary_cross_entropy(real* output,
real* entropy,
hl_sparse_matrix_s mat,
int dimM,
int dimN) {}
inline void hl_matrix_multi_binary_cross_entropy(
real* output, real* entropy, hl_sparse_matrix_s mat, int dimM, int dimN) {}
inline void hl_matrix_multi_binary_cross_entropy_bp(real* output,
real* grad,
hl_sparse_matrix_s mat,
int dimM,
int dimN) {}
inline void hl_matrix_multi_binary_cross_entropy_bp(
real* output, real* grad, hl_sparse_matrix_s mat, int dimM, int dimN) {}
inline void hl_matrix_zero_mem(real* data, int num) {}
......@@ -101,7 +82,6 @@ inline void hl_cossim(real* output,
int input2_height,
real scale) {}
inline void hl_cossim_derivative(real* grad,
real* output,
real* prevOutX,
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_SEQUENCE_STUB_H_
#define HL_SEQUENCE_STUB_H_
......@@ -21,15 +20,12 @@ limitations under the License. */
inline void hl_max_sequence_forward(real* input,
const int* sequence,
real* output,
int *index,
int* index,
int numSequences,
int dim) {}
inline void hl_max_sequence_backward(real* outputGrad,
int *index,
real* inputGrad,
int numSequences,
int dim) {}
inline void hl_max_sequence_backward(
real* outputGrad, int* index, real* inputGrad, int numSequences, int dim) {}
inline void hl_context_projection_forward(real* input,
const int* sequence,
......@@ -60,16 +56,16 @@ inline void hl_context_projection_backward_weight(real* outputGrad,
int contextStart,
int beginPad) {}
inline void hl_sequence2batch_copy(real *batch,
real *sequence,
const int *batchIndex,
inline void hl_sequence2batch_copy(real* batch,
real* sequence,
const int* batchIndex,
int seqWidth,
int batchCount,
bool seq2batch) {}
inline void hl_sequence2batch_add(real *batch,
real *sequence,
int *batchIndex,
inline void hl_sequence2batch_add(real* batch,
real* sequence,
int* batchIndex,
int seqWidth,
int batchCount,
bool seq2batch) {}
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifndef HL_SPARSE_STUB_H_
#define HL_SPARSE_STUB_H_
......@@ -28,7 +27,7 @@ inline void hl_malloc_sparse_matrix(hl_sparse_matrix_s *A_d,
inline void hl_free_sparse_matrix(hl_sparse_matrix_s A_d) {}
inline void hl_construct_sparse_matrix(hl_sparse_matrix_s *A_d,
void * dest_d,
void *dest_d,
size_t size,
hl_matrix_format_t format,
hl_matrix_value_t value_type,
......@@ -37,9 +36,9 @@ inline void hl_construct_sparse_matrix(hl_sparse_matrix_s *A_d,
int nnz) {}
inline void hl_construct_sparse_matrix(hl_sparse_matrix_s *A_d,
real* value_d,
int* rows_d,
int* cols_d,
real *value_d,
int *rows_d,
int *cols_d,
hl_matrix_format_t format,
hl_matrix_value_t value_type,
int dimM,
......@@ -87,10 +86,14 @@ inline void hl_matrix_csr_mul_dense(hl_sparse_matrix_s A_d,
inline void hl_matrix_csc_mul_dense(hl_sparse_matrix_s A_d,
hl_trans_op_t transa,
real *B_d, hl_trans_op_t transb,
real *B_d,
hl_trans_op_t transb,
real *C_d,
int dimM, int dimN, int dimK,
real alpha, real beta) {}
int dimM,
int dimN,
int dimK,
real alpha,
real beta) {}
inline void hl_matrix_dense_mul_csc(real *A_d,
hl_trans_op_t transa,
......@@ -103,18 +106,27 @@ inline void hl_matrix_dense_mul_csc(real *A_d,
real alpha,
real beta) {}
inline void hl_sparse_matrix_mul(real* A_d, hl_trans_op_t transa,
real *B_d, hl_trans_op_t transb,
inline void hl_sparse_matrix_mul(real *A_d,
hl_trans_op_t transa,
real *B_d,
hl_trans_op_t transb,
hl_sparse_matrix_s C_d,
int dimM, int dimN, int dimK,
real alpha, real beta) {}
int dimM,
int dimN,
int dimK,
real alpha,
real beta) {}
inline void hl_matrix_dense_mul_csr(real *A_d, hl_trans_op_t transa,
inline void hl_matrix_dense_mul_csr(real *A_d,
hl_trans_op_t transa,
hl_sparse_matrix_s B_d,
hl_trans_op_t transb,
real *C_d,
int dimM, int dimN, int dimK,
real alpha, real beta) {}
int dimM,
int dimN,
int dimK,
real alpha,
real beta) {}
inline void hl_memcpy_from_csc_matrix(real *csc_val,
size_t val_size,
......@@ -134,49 +146,39 @@ inline void hl_memcpy_from_csr_matrix(real *csr_val,
hl_sparse_matrix_s csr_matrix,
hl_stream_t stream) {}
inline void hl_sparse_matrix_column_sum(real* A_d,
hl_sparse_matrix_s B_d,
int dimM,
int dimN,
real scale) {}
inline void hl_sparse_matrix_column_sum(
real *A_d, hl_sparse_matrix_s B_d, int dimM, int dimN, real scale) {}
inline void hl_matrix_csr_column_sum(real* A_d,
hl_sparse_matrix_s B_d,
int dimM,
int dimN,
real scale) {}
inline void hl_matrix_csr_column_sum(
real *A_d, hl_sparse_matrix_s B_d, int dimM, int dimN, real scale) {}
inline void hl_sparse_matrix_add_bias(hl_sparse_matrix_s A_d,
real* B_d,
real *B_d,
real scale) {}
inline void hl_matrix_csr_add_bias(hl_sparse_matrix_s A_d,
real* B_d,
real *B_d,
real scale) {}
inline void hl_sparse_matrix_add_dense(hl_sparse_matrix_s A_d,
real* B_d,
real *B_d,
int dimM,
int dimN,
real alpha,
real beta) {}
inline void hl_matrix_csr_add_dense(hl_sparse_matrix_s A_d,
real* B_d,
real *B_d,
int dimM,
int dimN,
real alpha,
real beta) {}
inline int* hl_sparse_matrix_get_rows(hl_sparse_matrix_s sMat) {
return NULL;
}
inline int *hl_sparse_matrix_get_rows(hl_sparse_matrix_s sMat) { return NULL; }
inline int* hl_sparse_matrix_get_cols(hl_sparse_matrix_s sMat) {
return NULL;
}
inline int *hl_sparse_matrix_get_cols(hl_sparse_matrix_s sMat) { return NULL; }
inline real* hl_sparse_matrix_get_value(hl_sparse_matrix_s sMat) {
inline real *hl_sparse_matrix_get_value(hl_sparse_matrix_s sMat) {
return NULL;
}
......
This diff is collapsed.
......@@ -12,20 +12,19 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include <immintrin.h>
#include "hl_functions.h"
namespace hppl {
extern __m256 exp(__m256 a);

__m256 relu(const __m256 a) {
  __m256 tmp = _mm256_set1_ps(0.0f);
  return _mm256_max_ps(a, tmp);
}
__m256 sigmoid(const __m256 a) {
  __m256 max = _mm256_set1_ps(SIGMOID_THRESHOLD_MAX);
  __m256 min = _mm256_set1_ps(SIGMOID_THRESHOLD_MIN);
  __m256 tmp = _mm256_max_ps(a, min);
......@@ -35,39 +34,36 @@ namespace hppl {
  tmp = _mm256_add_ps(_mm256_set1_ps(1.0f), tmp);
  tmp = _mm256_div_ps(_mm256_set1_ps(1.0f), tmp);
  return tmp;
}

__m256 tanh(const __m256 a) {
__m256 max = _mm256_set1_ps(EXP_MAX_INPUT);
__m256 tmp = _mm256_mul_ps(_mm256_set1_ps(-2.0f), a);
tmp = _mm256_min_ps(tmp, max);
tmp = exp(tmp);
return _mm256_sub_ps(
_mm256_div_ps(_mm256_set1_ps(2.0f),
_mm256_add_ps(_mm256_set1_ps(1.0f), tmp)), _mm256_set1_ps(1.0f));
}
return _mm256_sub_ps(_mm256_div_ps(_mm256_set1_ps(2.0f),
_mm256_add_ps(_mm256_set1_ps(1.0f), tmp)),
_mm256_set1_ps(1.0f));
}
__m256 linear(const __m256 a) {
return a;
}
__m256 linear(const __m256 a) { return a; }
__m256 relu(const __m256 a, const __m256 b) {
  return _mm256_mul_ps(
      a,
      _mm256_and_ps(_mm256_cmp_ps(b, _mm256_set1_ps(0.0f), _CMP_GT_OS),
                    _mm256_set1_ps(1.0f)));
}
__m256 sigmoid(const __m256 a, const __m256 b) {
  return _mm256_mul_ps(_mm256_mul_ps(a, b),
                       _mm256_sub_ps(_mm256_set1_ps(1.0f), b));
}
__m256 tanh(const __m256 a, const __m256 b) {
return _mm256_mul_ps(a,
_mm256_sub_ps(_mm256_set1_ps(1.0f), _mm256_mul_ps(b, b)));
}
__m256 tanh(const __m256 a, const __m256 b) {
return _mm256_mul_ps(
a, _mm256_sub_ps(_mm256_set1_ps(1.0f), _mm256_mul_ps(b, b)));
}
__m256 linear(const __m256 a, const __m256 b) {
return a;
}
__m256 linear(const __m256 a, const __m256 b) { return a; }
} // namespace hppl
......@@ -12,46 +12,33 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include <math.h>
#include "hl_functions.h"
namespace hppl {
real relu(const real a) {
return a > 0.0f ? a : 0.0f;
}
real relu(const real a) { return a > 0.0f ? a : 0.0f; }
real sigmoid(const real a) {
  const real min = SIGMOID_THRESHOLD_MIN;
  const real max = SIGMOID_THRESHOLD_MAX;
  real tmp = (a < min) ? min : ((a > max) ? max : a);
  return 1.0 / (1.0 + exp(-tmp));
}

real tanh(const real a) {
  real tmp = -2.0 * a;
  tmp = (tmp > EXP_MAX_INPUT) ? EXP_MAX_INPUT : tmp;
  return (2.0 / (1.0 + exp(tmp))) - 1.0;
}
real linear(const real a) {
return a;
}
real linear(const real a) { return a; }
real relu(const real a, const real b) {
return a * (b > 0.0f ? 1.0f : 0.0f);
}
real relu(const real a, const real b) { return a * (b > 0.0f ? 1.0f : 0.0f); }
real sigmoid(const real a, const real b) {
return a * b * (1 - b);
}
real sigmoid(const real a, const real b) { return a * b * (1 - b); }
real tanh(const real a, const real b) {
return a * (1.0f - b * b);
}
real tanh(const real a, const real b) { return a * (1.0f - b * b); }
real linear(const real a, const real b) {
return a;
}
real linear(const real a, const real b) { return a; }
} // namespace hppl
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include <sys/time.h>
#include <mutex>
#include "hl_cuda.h"
......@@ -24,7 +23,7 @@ limitations under the License. */
namespace dynload {
std::once_flag cublas_dso_flag;
void* cublas_dso_handle = nullptr;
void *cublas_dso_handle = nullptr;
/**
* The following macro definition can generate structs
......@@ -39,9 +38,8 @@ void* cublas_dso_handle = nullptr;
template <typename... Args> \
cublasStatus_t operator()(Args... args) { \
typedef cublasStatus_t (*cublasFunc)(Args...); \
std::call_once(cublas_dso_flag, GetCublasDsoHandle, \
&cublas_dso_handle); \
void* p_##__name = dlsym(cublas_dso_handle, #__name); \
std::call_once(cublas_dso_flag, GetCublasDsoHandle, &cublas_dso_handle); \
void *p_##__name = dlsym(cublas_dso_handle, #__name); \
return reinterpret_cast<cublasFunc>(p_##__name)(args...); \
} \
} __name; // struct DynLoad__##__name
......@@ -55,10 +53,10 @@ void* cublas_dso_handle = nullptr;
} __name; // struct DynLoad__##__name
#endif
#define DYNAMIC_LOAD_CUBLAS_V2_WRAP(__name) \
DYNAMIC_LOAD_CUBLAS_WRAP(__name)
#define DYNAMIC_LOAD_CUBLAS_V2_WRAP(__name) DYNAMIC_LOAD_CUBLAS_WRAP(__name)
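As the comment above says, the macro generates one functor struct per cuBLAS symbol that lazily resolves the function out of the shared library. A standalone sketch of the pattern it expands to; the names, library path, and `int` return type are illustrative stand-ins, not the actual HPPL expansion:

```cpp
#include <dlfcn.h>

#include <mutex>

static std::once_flag dso_flag;
static void* dso_handle = nullptr;

// Functor: dlopen() the DSO once, dlsym() the symbol, forward the call.
struct DynLoad_cublasCreate {
  template <typename... Args>
  int operator()(Args... args) {
    std::call_once(dso_flag, [] {
      dso_handle = dlopen("libcublas.so", RTLD_LAZY | RTLD_LOCAL);
    });
    typedef int (*Func)(Args...);
    void* p = dlsym(dso_handle, "cublasCreate_v2");
    return reinterpret_cast<Func>(p)(args...);
  }
} dynload_cublasCreate;
```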
// include all needed cublas functions in HPPL
// clang-format off
#define CUBLAS_BLAS_ROUTINE_EACH(__macro) \
__macro(cublasSgemv) \
__macro(cublasDgemv) \
......@@ -88,7 +86,7 @@ CUBLAS_BLAS_ROUTINE_EACH(DYNAMIC_LOAD_CUBLAS_V2_WRAP)
} /* namespace dynload */
// clang-format on
#ifndef PADDLE_TYPE_DOUBLE
#define CUBLAS_GEAM dynload::cublasSgeam
#define CUBLAS_GEMV dynload::cublasSgemv
......@@ -103,7 +101,7 @@ CUBLAS_BLAS_ROUTINE_EACH(DYNAMIC_LOAD_CUBLAS_V2_WRAP)
#define CUBLAS_GETRI dynload::cublasDgetriBatched
#endif
const char* hl_cublas_get_error_string(cublasStatus_t status) {
const char *hl_cublas_get_error_string(cublasStatus_t status) {
switch (status) {
case CUBLAS_STATUS_NOT_INITIALIZED:
return "[cublas status]: not initialized";
......@@ -134,9 +132,7 @@ cublasStatus_t g_cublasStat;
#define CHECK_CUBLAS(cublas_func) \
g_cublasStat = cublas_func; \
CHECK_EQ(CUBLAS_STATUS_SUCCESS, g_cublasStat) \
<< "Cublas Error: " \
<< hl_cublas_get_error_string(g_cublasStat) \
<< " "
<< "Cublas Error: " << hl_cublas_get_error_string(g_cublasStat) << " "
void hl_cublas_init(cublasHandle_t *cublas_handle, cudaStream_t stream) {
CHECK_CUBLAS(dynload::cublasCreate(cublas_handle))
......@@ -146,12 +142,8 @@ void hl_cublas_init(cublasHandle_t *cublas_handle, cudaStream_t stream) {
<< "[cublas init] Cublas set stream faild!";
}
void hl_matrix_transpose(real *A_d,
real *C_d,
int dimM,
int dimN,
int lda,
int ldc) {
void hl_matrix_transpose(
real *A_d, real *C_d, int dimM, int dimN, int lda, int ldc) {
real alpha = 1.0;
real beta = 0.0;
......@@ -159,11 +151,18 @@ void hl_matrix_transpose(real *A_d,
CHECK_NOTNULL(C_d);
CHECK_CUBLAS(CUBLAS_GEAM(t_resource.handle,
CUBLAS_OP_T, CUBLAS_OP_N,
dimM, dimN,
&alpha, A_d, lda,
&beta, nullptr, dimM,
C_d, ldc));
CUBLAS_OP_T,
CUBLAS_OP_N,
dimM,
dimN,
&alpha,
A_d,
lda,
&beta,
nullptr,
dimM,
C_d,
ldc));
CHECK_SYNC("hl_matrix_transpose failed");
}
......@@ -188,8 +187,8 @@ void hl_matrix_inverse(real *A_d, real *C_d, int dimN, int lda, int ldc) {
small-sized matrices. There may be a better way to reconstruct
the API for better performance.
*/
CHECK_CUBLAS(CUBLAS_GETRF(t_resource.handle,
dimN, inout_d, lda, pivot_d, info_d, 1));
CHECK_CUBLAS(
CUBLAS_GETRF(t_resource.handle, dimN, inout_d, lda, pivot_d, info_d, 1));
int info_h;
hl_memcpy(&info_h, info_d, sizeof(int));
......@@ -203,8 +202,14 @@ void hl_matrix_inverse(real *A_d, real *C_d, int dimN, int lda, int ldc) {
hl_memcpy(out_d, out_h, sizeof(real *));
CHECK_CUBLAS(CUBLAS_GETRI(t_resource.handle,
dimN, (const real **)inout_d, lda, pivot_d,
out_d, ldc, info_d, 1));
dimN,
(const real **)inout_d,
lda,
pivot_d,
out_d,
ldc,
info_d,
1));
hl_memcpy(&info_h, info_d, sizeof(int));
if (info_h != 0) {
......@@ -218,12 +223,19 @@ void hl_matrix_inverse(real *A_d, real *C_d, int dimN, int lda, int ldc) {
CHECK_SYNC("hl_matrix_inverse failed");
}
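The routine above inverts via cuBLAS's batched LU path with a batch of one: getrf computes the LU factorization in place, then getri forms the inverse from the factors. Schematically (pseudo-calls illustrating the pipeline, not exact signatures):

```cpp
// Schematic two-step inversion with batch size 1 (not exact signatures):
//   getrfBatched(handle, n, {A}, lda, pivots, infos, 1);  // A -> L,U in place
//   getriBatched(handle, n, {A}, lda, pivots, {Ainv}, ldc, infos, 1);
```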
void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
real *B_d, hl_trans_op_t transb,
void hl_matrix_mul(real *A_d,
hl_trans_op_t transa,
real *B_d,
hl_trans_op_t transb,
real *C_d,
int dimM, int dimN, int dimK,
real alpha, real beta,
int lda, int ldb, int ldc) {
int dimM,
int dimN,
int dimK,
real alpha,
real beta,
int lda,
int ldb,
int ldc) {
CHECK_NOTNULL(A_d);
CHECK_NOTNULL(B_d);
CHECK_NOTNULL(C_d);
......@@ -231,8 +243,8 @@ void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
if (dimN == 1 && dimM != 1 && dimK != 1 && transb == HPPL_OP_N) {
int m = (transa == HPPL_OP_N) ? dimM : dimK;
int n = (transa == HPPL_OP_N) ? dimK : dimM;
hl_matrix_mul_vector(A_d, transa, B_d, C_d, m, n,
alpha, beta, lda, ldb, ldc);
hl_matrix_mul_vector(
A_d, transa, B_d, C_d, m, n, alpha, beta, lda, ldb, ldc);
return;
}
......@@ -240,8 +252,7 @@ void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
int m = (transb == HPPL_OP_N) ? dimK : dimN;
int n = (transb == HPPL_OP_N) ? dimN : dimK;
hl_trans_op_t trans = (transb == HPPL_OP_N) ? HPPL_OP_T : HPPL_OP_N;
hl_matrix_mul_vector(B_d, trans, A_d, C_d, m, n,
alpha, beta, ldb, 1, 1);
hl_matrix_mul_vector(B_d, trans, A_d, C_d, m, n, alpha, beta, ldb, 1, 1);
return;
}
......@@ -250,26 +261,47 @@ void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
stat = CUBLAS_GEMM(t_resource.handle,
CUBLAS_OP_N,
CUBLAS_OP_N,
dimN, dimM, dimK,
&alpha, B_d, ldb,
A_d, lda,
&beta, C_d, ldc);
dimN,
dimM,
dimK,
&alpha,
B_d,
ldb,
A_d,
lda,
&beta,
C_d,
ldc);
} else if ((HPPL_OP_T == transa) && (HPPL_OP_N == transb)) {
stat = CUBLAS_GEMM(t_resource.handle,
CUBLAS_OP_N,
CUBLAS_OP_T,
dimN, dimM, dimK,
&alpha, B_d, ldb,
A_d, lda,
&beta, C_d, ldc);
dimN,
dimM,
dimK,
&alpha,
B_d,
ldb,
A_d,
lda,
&beta,
C_d,
ldc);
} else if ((HPPL_OP_N == transa) && (HPPL_OP_T == transb)) {
stat = CUBLAS_GEMM(t_resource.handle,
CUBLAS_OP_T,
CUBLAS_OP_N,
dimN, dimM, dimK,
&alpha, B_d, ldb,
A_d, lda,
&beta, C_d, ldc);
dimN,
dimM,
dimK,
&alpha,
B_d,
ldb,
A_d,
lda,
&beta,
C_d,
ldc);
} else {
LOG(FATAL) << "parameter transa error!";
}
......@@ -277,24 +309,46 @@ void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
CHECK_SYNC("hl_matrix_mul failed");
}
void hl_matrix_mul(real *A_d, hl_trans_op_t transa,
real *B_d, hl_trans_op_t transb,
void hl_matrix_mul(real *A_d,
hl_trans_op_t transa,
real *B_d,
hl_trans_op_t transb,
real *C_d,
int dimM, int dimN, int dimK,
real alpha, real beta) {
int dimM,
int dimN,
int dimK,
real alpha,
real beta) {
int lda = (HPPL_OP_N == transa) ? dimK : dimM;
int ldb = (HPPL_OP_N == transb) ? dimN : dimK;
int ldc = dimN;
hl_matrix_mul(A_d, transa, B_d, transb, C_d, dimM, dimN,
dimK, alpha, beta, lda, ldb, ldc);
hl_matrix_mul(A_d,
transa,
B_d,
transb,
C_d,
dimM,
dimN,
dimK,
alpha,
beta,
lda,
ldb,
ldc);
}
void hl_matrix_mul_vector(real *A_d, hl_trans_op_t trans,
real *B_d, real *C_d,
int dimM, int dimN,
real alpha, real beta,
int lda, int incb, int incc) {
void hl_matrix_mul_vector(real *A_d,
hl_trans_op_t trans,
real *B_d,
real *C_d,
int dimM,
int dimN,
real alpha,
real beta,
int lda,
int incb,
int incc) {
CHECK_NOTNULL(A_d);
CHECK_NOTNULL(B_d);
CHECK_NOTNULL(C_d);
......@@ -303,21 +357,29 @@ void hl_matrix_mul_vector(real *A_d, hl_trans_op_t trans,
if (HPPL_OP_N == trans) {
stat = CUBLAS_GEMV(t_resource.handle,
CUBLAS_OP_T,
dimN, dimM,
dimN,
dimM,
&alpha,
A_d, lda,
B_d, incb,
A_d,
lda,
B_d,
incb,
&beta,
C_d, incc);
C_d,
incc);
} else if (HPPL_OP_T == trans) {
stat = CUBLAS_GEMV(t_resource.handle,
CUBLAS_OP_N,
dimN, dimM,
dimN,
dimM,
&alpha,
A_d, lda,
B_d, incb,
A_d,
lda,
B_d,
incb,
&beta,
C_d, incc);
C_d,
incc);
} else {
LOG(FATAL) << "parameter transa error!";
}
......@@ -326,10 +388,14 @@ void hl_matrix_mul_vector(real *A_d, hl_trans_op_t trans,
CHECK_SYNC("hl_matrix_mul_vector");
}
void hl_matrix_mul_vector(real *A_d, hl_trans_op_t trans,
real *B_d, real *C_d,
int dimM, int dimN,
real alpha, real beta) {
hl_matrix_mul_vector(A_d, trans, B_d, C_d, dimM, dimN,
alpha, beta, dimN, 1, 1);
void hl_matrix_mul_vector(real *A_d,
hl_trans_op_t trans,
real *B_d,
real *C_d,
int dimM,
int dimN,
real alpha,
real beta) {
hl_matrix_mul_vector(
A_d, trans, B_d, C_d, dimM, dimN, alpha, beta, dimN, 1, 1);
}
This diff is collapsed.
This diff is collapsed.
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#ifdef PADDLE_USE_DSO
#include <mutex>
......@@ -29,7 +28,7 @@ limitations under the License. */
namespace dynload {
extern std::once_flag cudart_dso_flag;
extern void* cudart_dso_handle;
extern void *cudart_dso_handle;
/**
* The following macro definition can generate structs
......@@ -41,14 +40,14 @@ extern void* cudart_dso_handle;
template <typename... Args> \
__type operator()(Args... args) { \
typedef __type (*cudartFunc)(Args...); \
std::call_once(cudart_dso_flag, GetCudartDsoHandle, \
&cudart_dso_handle); \
void* p_##__name = dlsym(cudart_dso_handle, #__name); \
std::call_once(cudart_dso_flag, GetCudartDsoHandle, &cudart_dso_handle); \
void *p_##__name = dlsym(cudart_dso_handle, #__name); \
return reinterpret_cast<cudartFunc>(p_##__name)(args...); \
} \
} __name; /* struct DynLoad__##__name */
/* include all needed cuda functions in HPPL */
// clang-format off
#define CUDA_ROUTINE_EACH(__macro) \
__macro(cudaLaunch, cudaError_t) \
__macro(cudaSetupArgument, cudaError_t) \
......@@ -61,11 +60,12 @@ extern void* cudart_dso_handle;
__macro(__cudaInitModule, char) \
__macro(__cudaRegisterTexture, void) \
__macro(__cudaRegisterSurface, void)
// clang-format on
CUDA_ROUTINE_EACH(DYNAMIC_LOAD_CUDART_WRAP)
#if CUDART_VERSION >= 7000
DYNAMIC_LOAD_CUDART_WRAP(cudaLaunchKernel, cudaError_t)
#endif
#undef CUDA_ROUNTINE_EACH
......@@ -79,12 +79,11 @@ __host__ cudaError_t CUDARTAPI cudaLaunchKernel(const void *func,
void **args,
size_t sharedMem,
cudaStream_t stream) {
return dynload::cudaLaunchKernel(func, gridDim, blockDim,
args, sharedMem, stream);
return dynload::cudaLaunchKernel(
func, gridDim, blockDim, args, sharedMem, stream);
}
#endif /* CUDART_VERSION >= 7000 */
__host__ cudaError_t CUDARTAPI cudaLaunch(const void *func) {
return dynload::cudaLaunch(func);
}
......@@ -99,13 +98,12 @@ __host__ cudaError_t CUDARTAPI cudaConfigureCall(dim3 gridDim,
dim3 blockDim,
size_t sharedMem,
cudaStream_t stream) {
return dynload::cudaConfigureCall(gridDim, blockDim,
sharedMem, stream);
return dynload::cudaConfigureCall(gridDim, blockDim, sharedMem, stream);
}
extern "C" {
void** CUDARTAPI __cudaRegisterFatBinary(void *fatCubin) {
void **CUDARTAPI __cudaRegisterFatBinary(void *fatCubin) {
return dynload::__cudaRegisterFatBinary(fatCubin);
}
......@@ -113,8 +111,7 @@ void CUDARTAPI __cudaUnregisterFatBinary(void **fatCubinHandle) {
return dynload::__cudaUnregisterFatBinary(fatCubinHandle);
}
void CUDARTAPI __cudaRegisterFunction(
void **fatCubinHandle,
void CUDARTAPI __cudaRegisterFunction(void **fatCubinHandle,
const char *hostFun,
char *deviceFun,
const char *deviceName,
......@@ -123,76 +120,78 @@ void CUDARTAPI __cudaRegisterFunction(
uint3 *bid,
dim3 *bDim,
dim3 *gDim,
int *wSize
) {
return dynload::__cudaRegisterFunction(
fatCubinHandle, hostFun, deviceFun, deviceName,
thread_limit, tid, bid, bDim, gDim, wSize);
int *wSize) {
return dynload::__cudaRegisterFunction(fatCubinHandle,
hostFun,
deviceFun,
deviceName,
thread_limit,
tid,
bid,
bDim,
gDim,
wSize);
}
void CUDARTAPI __cudaRegisterVar(
void **fatCubinHandle,
void CUDARTAPI __cudaRegisterVar(void **fatCubinHandle,
char *hostVar,
char *deviceAddress,
const char *deviceName,
int ext,
int size,
int constant,
int global
) {
return dynload::__cudaRegisterVar(
fatCubinHandle, hostVar, deviceAddress,
deviceName, ext, size, constant, global);
int global) {
return dynload::__cudaRegisterVar(fatCubinHandle,
hostVar,
deviceAddress,
deviceName,
ext,
size,
constant,
global);
}
extern void CUDARTAPI __cudaRegisterManagedVar(
void **fatCubinHandle,
extern void CUDARTAPI __cudaRegisterManagedVar(void **fatCubinHandle,
void **hostVarPtrAddress,
char *deviceAddress,
const char *deviceName,
int ext,
int size,
int constant,
int global
) {
return dynload::__cudaRegisterManagedVar(
fatCubinHandle, hostVarPtrAddress, deviceAddress,
deviceName, ext, size, constant, global);
int global) {
return dynload::__cudaRegisterManagedVar(fatCubinHandle,
hostVarPtrAddress,
deviceAddress,
deviceName,
ext,
size,
constant,
global);
}
char CUDARTAPI __cudaInitModule(
void **fatCubinHandle
) {
char CUDARTAPI __cudaInitModule(void **fatCubinHandle) {
return dynload::__cudaInitModule(fatCubinHandle);
}
void CUDARTAPI __cudaRegisterTexture(
void **fatCubinHandle,
void CUDARTAPI __cudaRegisterTexture(void **fatCubinHandle,
const struct textureReference *hostVar,
const void **deviceAddress,
const char *deviceName,
int dim,
int norm,
int ext
) {
int ext) {
return dynload::__cudaRegisterTexture(
fatCubinHandle, hostVar, deviceAddress,
deviceName, dim, norm, ext);
fatCubinHandle, hostVar, deviceAddress, deviceName, dim, norm, ext);
}
void CUDARTAPI __cudaRegisterSurface(
void **fatCubinHandle,
void CUDARTAPI __cudaRegisterSurface(void **fatCubinHandle,
const struct surfaceReference *hostVar,
const void **deviceAddress,
const char *deviceName,
int dim,
int ext
) {
int ext) {
return dynload::__cudaRegisterSurface(
fatCubinHandle, hostVar, deviceAddress,
deviceName, dim, ext);
fatCubinHandle, hostVar, deviceAddress, deviceName, dim, ext);
}
} /* extern "C" */
......
......@@ -12,17 +12,18 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "hl_dso_loader.h"
#include "paddle/utils/Logging.h"
#include "paddle/utils/CommandLineParser.h"
#include "paddle/utils/Logging.h"
P_DEFINE_string(cudnn_dir, "",
P_DEFINE_string(cudnn_dir,
"",
"Specify path for loading libcudnn.so. For instance, "
"/usr/local/cudnn/lib64. If empty [default], dlopen "
"/usr/local/cudnn/lib. If empty [default], dlopen "
"will search cudnn from LD_LIBRARY_PATH");
P_DEFINE_string(cuda_dir, "",
P_DEFINE_string(cuda_dir,
"",
"Specify path for loading cuda library, such as libcublas, "
"libcurand. For instance, /usr/local/cuda/lib64. (Note: "
"libcudart can not be specified by cuda_dir, since some "
......@@ -33,7 +34,6 @@ static inline std::string join(const std::string& part1,
const std::string& part2) {
// directory separator
const char sep = '/';
if (!part2.empty() && part2.front() == sep) {
return part2;
}
......@@ -47,34 +47,37 @@ static inline std::string join(const std::string& part1,
return ret;
}
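From the branch visible above, `join` lets an absolute `part2` win outright and otherwise splices the two parts with a single '/'. A few illustrative expectations (hypothetical inputs, not from the source):

```cpp
// Hypothetical usage of the join() helper above:
//   join("/usr/local/cuda", "lib64")  -> "/usr/local/cuda/lib64"
//   join("/usr/local/cuda/", "lib64") -> "/usr/local/cuda/lib64"
//   join("/usr/local", "/opt/cudnn")  -> "/opt/cudnn"  (absolute part2 wins)
```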
static inline void GetDsoHandleFromDefaultPath(
std::string& dso_path, void** dso_handle, int dynload_flags) {
static inline void GetDsoHandleFromDefaultPath(std::string& dso_path,
void** dso_handle,
int dynload_flags) {
VLOG(3) << "Try to find cuda library: " << dso_path
<< " from default system path.";
// default search from LD_LIBRARY_PATH/DYLD_LIBRARY_PATH
*dso_handle = dlopen(dso_path.c_str(), dynload_flags);
// DYLD_LIBRARY_PATH is disabled after Mac OS 10.11 to
// bring System Integrity Protection (SIP); if dso_handle
// is null, search from the default package path on Mac OS.
#if defined(__APPLE__) || defined(__OSX__)
if (nullptr == *dso_handle) {
dso_path = join("/usr/local/cuda/lib/", dso_path);
*dso_handle = dlopen(dso_path.c_str(), dynload_flags);
if (nullptr == *dso_handle) {
if (dso_path == "libcudnn.dylib") {
LOG(FATAL) << "Note: [Recommend] copy cudnn into /usr/local/cuda/ \n" // NOLINT
<< "For instance, sudo tar -xzf cudnn-7.5-osx-x64-v5.0-ga.tgz -C " // NOLINT
<< "/usr/local \n sudo chmod a+r /usr/local/cuda/include/cudnn.h " // NOLINT
LOG(FATAL)
<< "Note: [Recommend] copy cudnn into /usr/local/cuda/ \n" // NOLINT
<< "For instance, sudo tar -xzf "
"cudnn-7.5-osx-x64-v5.0-ga.tgz -C " // NOLINT
<< "/usr/local \n sudo chmod a+r "
"/usr/local/cuda/include/cudnn.h " // NOLINT
<< "/usr/local/cuda/lib/libcudnn*";
}
}
}
#endif
#endif
}
static inline void GetDsoHandleFromSearchPath(
const std::string& search_root,
static inline void GetDsoHandleFromSearchPath(const std::string& search_root,
const std::string& dso_name,
void** dso_handle) {
int dynload_flags = RTLD_LAZY | RTLD_LOCAL;
......@@ -88,28 +91,40 @@ static inline void GetDsoHandleFromSearchPath(
dlPath = join(search_root, dso_name);
*dso_handle = dlopen(dlPath.c_str(), dynload_flags);
// if not found, search from default path
if (nullptr == dso_handle) {
if (nullptr == *dso_handle) {
LOG(WARNING) << "Failed to find cuda library: " << dlPath;
dlPath = dso_name;
GetDsoHandleFromDefaultPath(dlPath, dso_handle, dynload_flags);
}
}
CHECK(nullptr != *dso_handle)
<< "Failed to find cuda library: " << dlPath << std::endl
<< "Please specify its path correctly using one of the following ways: \n" // NOLINT
<< "Method 1. set cuda and cudnn lib path at runtime. "
<< "http://www.paddlepaddle.org/doc/ui/cmd_argument/argument_outline.html \n" // NOLINT
<< "For instance, issue command: paddle train --use_gpu=1 "
<< "--cuda_dir=/usr/local/cuda/lib64 --cudnn_dir=/usr/local/cudnn/lib ...\n" // NOLINT
<< "Method 2. set environment variable LD_LIBRARY_PATH on Linux or "
CHECK(nullptr != *dso_handle) << "Failed to find cuda library: " << dlPath
<< std::endl
<< "Please specify its path correctly using "
"one of the following ways: \n" // NOLINT
<< "Method 1. set cuda and cudnn lib path at "
"runtime. "
<< "http://www.paddlepaddle.org/doc/ui/"
"cmd_argument/"
"argument_outline.html \n" // NOLINT
<< "For instance, issue command: paddle train "
"--use_gpu=1 "
<< "--cuda_dir=/usr/local/cuda/lib64 "
"--cudnn_dir=/usr/local/cudnn/lib "
"...\n" // NOLINT
<< "Method 2. set environment variable "
"LD_LIBRARY_PATH on Linux or "
<< "DYLD_LIBRARY_PATH on Mac OS. \n"
<< "For instance, issue command: export LD_LIBRARY_PATH=... \n"
<< "Note: After Mac OS 10.11, using the DYLD_LIBRARY_PATH is impossible "
<< "unless System Integrity Protection (SIP) is disabled. However, method 1 " // NOLINT
<< "For instance, issue command: export "
"LD_LIBRARY_PATH=... \n"
<< "Note: After Mac OS 10.11, using the "
"DYLD_LIBRARY_PATH is impossible "
<< "unless System Integrity Protection (SIP) "
"is disabled. However, "
"method 1 " // NOLINT
<< "always work well.";
}
......
......@@ -12,24 +12,15 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "avx_mathfun.h"
namespace hppl {
__m256 exp(__m256 a) {
return exp256_ps(a);
}
__m256 exp(__m256 a) { return exp256_ps(a); }
__m256 log(__m256 a) {
return log256_ps(a);
}
__m256 log(__m256 a) { return log256_ps(a); }
__m256 sin(__m256 a) {
return sin256_ps(a);
}
__m256 sin(__m256 a) { return sin256_ps(a); }
__m256 cos(__m256 a) {
return cos256_ps(a);
}
__m256 cos(__m256 a) { return cos256_ps(a); }
} // namespace hppl
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include <chrono>
#include <stdlib.h>
#include <iostream>
......@@ -25,4 +24,3 @@ int64_t getCurrentTimeStick() {
high_resolution_clock::duration dtn = tp.time_since_epoch();
return dtn.count();
}
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
#include <string>
#include <vector>
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
#include "DataProvider.h"
......@@ -65,8 +64,8 @@ void DataProviderGroup<T>::reset() {
provider_ = nullptr;
// shuffle file list
std::shuffle(fileList_.begin(), fileList_.end(),
ThreadLocalRandomEngine::get());
std::shuffle(
fileList_.begin(), fileList_.end(), ThreadLocalRandomEngine::get());
startLoader();
DataProvider::reset();
......@@ -113,8 +112,9 @@ void DataProviderGroup<T>::startLoader() {
size_t endPos = std::min(fileList_.size(), startPos + loadFileCount);
std::vector<std::string> fileVec(fileList_.begin() + startPos,
fileList_.begin() + endPos);
loader_->addJob([this, fileVec]()
-> ProviderPtrType { return this->loadFile(fileVec); });
loader_->addJob([this, fileVec]() -> ProviderPtrType {
return this->loadFile(fileVec);
});
}
loader_->stopAddJob();
}
......
......@@ -12,7 +12,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#pragma once
#include "DataProvider.h"
......
(Remaining 128 file diffs collapsed.)