Commit 8ddc5faa authored by liaogang

Update Mac OS X port

* Fix bugs following review comments

Parent 54c37ab7
...@@ -44,8 +44,8 @@ set(ATLAS_LIB_SEARCH_PATHS
      /usr/lib
      /usr/lib/blas/atlas
      /usr/lib/atlas
      /usr/lib/atlas-base  # special for ubuntu 14.04.
    )

find_path(ATLAS_INC_DIR NAMES cblas.h
  PATHS ${ATLAS_INCLUDE_SEARCH_PATHS})
find_library(ATLAS_CBLAS_LIB NAMES cblas libcblas.so.3
......
...@@ -24,7 +24,9 @@ function(target_circle_link_libraries TARGET_NAME)
      list(APPEND libsInArgn ${arg})
    endif()
  endforeach()
  if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
    list(APPEND LIBS "-undefined dynamic_lookup")
  endif()
  list(REVERSE libsInArgn)
  target_link_libraries(${TARGET_NAME}
    ${LIBS}
......
Installing from Sources
=======================

* [1. Download and Setup](#download)
* [2. Requirements](#requirements)
* [3. Build on Ubuntu](#ubuntu)
* [4. Build on Mac OS X](#mac)

## <span id="download">Download and Setup</span>

You can download PaddlePaddle from the [github source](https://github.com/gangliao/Paddle).

```bash
git clone https://github.com/baidu/Paddle paddle
```

## <span id="requirements">Requirements</span>

To compile the source code, your computer must be equipped with GCC >= 4.6 or the Clang compiler.

### Dependencies

- **CMake**: version >= 2.8
- **BLAS**: MKL, OpenBLAS or ATLAS
- **protobuf**: version >= 2.4, **Note: 3.x is not supported**
- **python**: only python 2.7 is supported currently
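The minimum versions above can be checked mechanically before configuring. Below is a small sketch (the `version_ge` helper is illustrative, not part of PaddlePaddle) that compares dotted version strings using GNU `sort -V`:

```shell
# version_ge A B: succeeds when dotted version A is >= version B
# (relies on GNU sort's -V version ordering)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# e.g. compare an installed CMake version against the required 2.8
version_ge "3.0.2" "2.8" && echo "CMake requirement satisfied"
```

Feeding it the output of `cmake --version` or `protoc --version` automates the requirement check.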
### Options

PaddlePaddle supports several build options. To enable them, first install the related libraries.

Optional            | Description
------------------- | :-----------
**WITH_GPU**        | Compile with GPU mode.
**WITH_DOUBLE**     | Compile with double-precision floating point, default: single precision.
**WITH_GLOG**       | Compile with glog. If not found, an internal log implementation is used.
**WITH_GFLAGS**     | Compile with gflags. If not found, an internal flag implementation is used.
**WITH_TESTING**    | Compile with gtest for PaddlePaddle's unit testing.
**WITH_DOC**        | Compile to generate PaddlePaddle's docs, default: disabled (OFF).
**WITH_SWIG_PY**    | Compile with python predict API, default: disabled (OFF).
**WITH_STYLE_CHECK**| Compile with code style check, default: enabled (ON).

**Note:**

- The GPU version works best with Cuda Toolkit 7.5 and cuDNN v5.
- Other versions like Cuda Toolkit 6.5, 7.0, 8.0 and cuDNN v2, v3, v4 are also supported.
- **To utilize cuDNN v5, Cuda Toolkit 7.5 is a prerequisite, and vice versa.**
As a simple example, consider the following:

1. **Python Dependencies (optional)**

    To compile PaddlePaddle with the python predict API, make sure swig is installed and set `-DWITH_SWIG_PY=ON` as follows:

    ```bash
    # install swig on Ubuntu
    sudo apt-get install swig
    # install swig on Mac OS X
    brew install swig

    # activate swig in cmake
    cmake .. -DWITH_SWIG_PY=ON
    ```

2. **Doc Dependencies (optional)**

    To generate PaddlePaddle's documentation, install the dependencies and set `-DWITH_DOC=ON` as follows:

    ```bash
    pip install 'sphinx>=1.4.0'
    pip install sphinx_rtd_theme breathe recommonmark

    # install doxygen on Ubuntu
    sudo apt-get install doxygen
    # install doxygen on Mac OS X
    brew install doxygen

    # activate docs in cmake
    cmake .. -DWITH_DOC=ON
    ```
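Before passing `-DWITH_SWIG_PY=ON`, it can save a failed configure run to first confirm swig is actually on the PATH. A minimal sketch (the message strings are illustrative):

```shell
# check for swig before enabling WITH_SWIG_PY
if command -v swig >/dev/null 2>&1; then
  msg="swig found: $(command -v swig)"
else
  msg="swig not found; install it before setting -DWITH_SWIG_PY=ON"
fi
echo "$msg"
```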
## <span id="ubuntu">Build on Ubuntu 14.04</span>

### Install Dependencies

- **CPU Dependencies**

    ```bash
    # necessary
    sudo apt-get update
    sudo apt-get install -y g++ make cmake build-essential libatlas-base-dev python python-pip libpython-dev m4 libprotobuf-dev protobuf-compiler python-protobuf python-numpy git
    # optional
    sudo apt-get install libgoogle-glog-dev
    sudo apt-get install libgflags-dev
    sudo apt-get install libgtest-dev
    sudo pip install wheel
    pushd /usr/src/gtest
    cmake .
    make
    sudo cp *.a /usr/lib
    popd
    ```

- **GPU Dependencies (optional)**

    To build the GPU version, you will need the following installed:

    1. a CUDA-capable GPU
    2. a supported version of Linux with a gcc compiler and toolchain
    3. NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads)
    4. NVIDIA cuDNN Library (available at https://developer.nvidia.com/cudnn)

    The CUDA development environment relies on tight integration with the host development environment,
    including the host compiler and C runtime libraries, and is therefore only supported on
    distribution versions that have been qualified for this CUDA Toolkit release.

    After downloading the cuDNN library, issue the following commands:

    ```bash
    sudo tar -xzf cudnn-7.5-linux-x64-v5.1.tgz -C /usr/local
    sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
    ```

    Then you need to set the LD_LIBRARY_PATH, CUDA_HOME and PATH environment variables in ~/.bashrc.

    ```bash
    export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
    export CUDA_HOME=/usr/local/cuda
    export PATH=/usr/local/cuda/bin:$PATH
    ```
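The exports above only affect the current shell; to persist them they must land in `~/.bashrc`. A hedged sketch that appends the block exactly once (the marker comment is an illustrative convention, and the default `/usr/local/cuda` prefix is assumed):

```shell
# append CUDA environment setup to ~/.bashrc, guarded by a marker line
BASHRC="${HOME}/.bashrc"
MARKER="# >>> cuda env (paddle install guide) >>>"
if ! grep -qF "$MARKER" "$BASHRC" 2>/dev/null; then
  {
    echo "$MARKER"
    echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH'
    echo 'export CUDA_HOME=/usr/local/cuda'
    echo 'export PATH=/usr/local/cuda/bin:$PATH'
  } >> "$BASHRC"
fi
```

Re-running the snippet is a no-op, so it is safe to keep in a provisioning script.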
### Build and Install

As usual, the best option is to create a build folder under the paddle project directory.

```bash
mkdir build && cd build
cmake ..
```

CMake first checks PaddlePaddle's dependencies in the system default paths. After installing some optional
libraries, the corresponding build options will be set automatically (for instance, glog, gtest and gflags).
If a dependency is still not found, you can set it manually based on the CMake error information on your screen.

As a simple example, consider the following:

- **Only CPU**

    ```bash
    cmake .. -DWITH_GPU=OFF -DWITH_DOC=OFF
    ```
- **GPU**

    ```bash
    cmake .. -DWITH_GPU=ON -DWITH_DOC=OFF
    ```
- **GPU with doc and swig**

    ```bash
    cmake .. -DWITH_GPU=ON -DWITH_DOC=ON -DWITH_SWIG_PY=ON
    ```
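CMake caches option values in `CMakeCache.txt`, so when switching between the configurations above it is safest to reconfigure from a fresh build directory. A small sketch:

```shell
# reset the out-of-source build directory before changing options
# (the cmake invocation is commented out: it needs a Paddle source checkout)
rm -rf build && mkdir build && cd build
# cmake .. -DWITH_GPU=OFF -DWITH_DOC=OFF
echo "reconfiguring from empty build dir: $(basename "$PWD")"
```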
Finally, you can build and install PaddlePaddle:

```bash
# you can add build options here, such as:
cmake .. -DWITH_GPU=ON -DWITH_DOC=OFF -DCMAKE_INSTALL_PREFIX=<path to install>
# please use sudo make install, if you want
# to install PaddlePaddle into the system
make -j `nproc` && make install
# set PaddlePaddle installation path in ~/.bashrc
export PATH=<path to install>/bin:$PATH
```

**Note:**

If you set `WITH_SWIG_PY=ON`, the related python dependencies also need to be installed.
Otherwise, PaddlePaddle will install them automatically the first time you run a paddle command, such as `paddle version` or `paddle train`.
This may require sudo privileges:

```bash
# you can run
sudo pip install <path to install>/opt/paddle/share/wheels/*.whl
# or just run
sudo paddle version
```
## <span id="mac">Building on Mac OS X</span>

### Prerequisites

...@@ -150,7 +191,7 @@ This guide is based on Mac OS X 10.11 (El Capitan). Note that if you are running
you will already have Python 2.7.10 and Numpy 1.8 installed.

The best option is to use the package manager homebrew to handle installations and upgrades for you.
To install [homebrew](http://brew.sh/), first open a terminal window (you can find Terminal in the Utilities folder in Applications), and issue the command:

```bash
# install brew
```
...@@ -163,109 +204,103 @@ easy_install pip

- **CPU Dependencies**

    ```bash
    # Install fundamental dependents
    brew install glog gflags cmake protobuf openblas

    # Install google test on Mac OS X
    # Download gtest 1.7.0
    wget https://github.com/google/googletest/archive/release-1.7.0.tar.gz
    tar -xvf release-1.7.0.tar.gz && cd googletest-release-1.7.0
    # Build gtest
    mkdir build && cd build && cmake ..
    make
    # Install gtest library
    sudo cp -r ../include/gtest /usr/local/include/
    sudo cp lib*.a /usr/local/lib
    ```
- **GPU Dependencies (optional)**

    To build the GPU version, you will need the following installed:

    1. a CUDA-capable GPU
    2. Mac OS X 10.11 or later
    3. the Clang compiler and toolchain installed using Xcode
    4. NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads)
    5. NVIDIA cuDNN Library (available at https://developer.nvidia.com/cudnn)

    The CUDA development environment relies on tight integration with the host development environment,
    including the host compiler and C runtime libraries, and is therefore only supported on
    distribution versions that have been qualified for this CUDA Toolkit release.

    1. After downloading the cuDNN library, issue the following commands:

        ```bash
        sudo tar -xzf cudnn-7.5-osx-x64-v5.0-ga.tgz -C /usr/local
        sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
        ```

    2. Then you need to set the DYLD_LIBRARY_PATH, CUDA_HOME and PATH environment variables in ~/.bashrc.

        ```bash
        export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:$DYLD_LIBRARY_PATH
        export CUDA_HOME=/usr/local/cuda
        export PATH=/usr/local/cuda/bin:$PATH
        ```
### Build and Install

As usual, the best option is to create a build folder under the paddle project directory.

```bash
mkdir build && cd build
cmake ..
```

CMake first checks PaddlePaddle's dependencies in the system default paths. After installing some optional
libraries, the corresponding build options will be set automatically (for instance, glog, gtest and gflags).
If a dependency is still not found, you can set it manually based on the CMake error information on your screen.

As a simple example, consider the following:

- **Only CPU**

    ```bash
    cmake .. -DWITH_GPU=OFF -DWITH_DOC=OFF
    ```
- **GPU**

    ```bash
    cmake .. -DWITH_GPU=ON -DWITH_DOC=OFF
    ```
- **GPU with doc and swig**

    ```bash
    cmake .. -DWITH_GPU=ON -DWITH_DOC=ON -DWITH_SWIG_PY=ON
    ```
Finally, you can build PaddlePaddle:

```bash
# you can add build options here, such as:
cmake .. -DWITH_GPU=ON -DWITH_DOC=OFF -DCMAKE_INSTALL_PREFIX=<installation path>
# please use sudo make install, if you want to install PaddlePaddle into the system
make -j `nproc` && make install
# set PaddlePaddle installation path in ~/.bashrc
export PATH=<installation path>/bin:$PATH
```

**Note:**

If you set `WITH_SWIG_PY=ON`, the related python dependencies also need to be installed.
Otherwise, PaddlePaddle will install them automatically the first time you run a paddle command, such as `paddle version` or `paddle train`.
This may require sudo privileges:

```bash
# you can run
sudo pip install <path to install>/opt/paddle/share/wheels/*.whl
# or just run
sudo paddle version
```
\ No newline at end of file
...@@ -239,7 +239,7 @@ void Matrix::toNumpyMatInplace(float** view_data, int* dim1,
}

void Matrix::copyToNumpyMat(float** view_m_data, int* dim1,
                            int* dim2) throw(UnsupportError) {
  static_assert(sizeof(paddle::real) == sizeof(float),
                "Currently PaddleAPI only support for single "
                "precision version of paddle.");
  if (this->isSparse()) {
...@@ -251,12 +251,12 @@ void Matrix::copyToNumpyMat(float** view_m_data, int* dim1,
    if (auto cpuMat = dynamic_cast<paddle::CpuMatrix*>(m->mat.get())) {
      auto src = cpuMat->getData();
      auto dest = *view_m_data;
      std::memcpy(dest, src, sizeof(paddle::real) * (*dim1) * (*dim2));
    } else if (auto gpuMat = dynamic_cast<paddle::GpuMatrix*>(m->mat.get())) {
      auto src = gpuMat->getData();
      auto dest = *view_m_data;
      hl_memcpy_device2host(dest, src,
                            sizeof(paddle::real) * (*dim1) * (*dim2));
    } else {
      LOG(WARNING) << "Unexpected Situation";
      throw UnsupportError();
......
...@@ -385,10 +385,17 @@ void NeuralNetwork::setOutputGrad(const std::vector<Argument>& args) {
  }
}

extern NeuralNetwork* newCustomNerualNetwork(
    const std::string& name, NeuralNetwork* network) __attribute__((weak));

NeuralNetwork* NeuralNetwork::newNeuralNetwork(
    const std::string& name,
    NeuralNetwork* rootNetwork) {
  if (newCustomNerualNetwork) {
    return newCustomNerualNetwork(name, rootNetwork);
  } else {
    return new NeuralNetwork(name, rootNetwork);
  }
}

}  // namespace paddle
...@@ -94,7 +94,11 @@ TEST(checkGradient, multi) {
TEST(checkGradient, hsigmoid) { checkGradientTest(configFile2, false, false); }

TEST(checkGradient, chunk) {
#if defined(__APPLE__) || defined(__OSX__)
  EXPECT_EQ(0, system("python trainer/tests/gen_proto_data.py"));
#else
  EXPECT_EQ(0, system("python2 trainer/tests/gen_proto_data.py"));
#endif
  checkGradientTest(configFile3, false, false);
#ifndef PADDLE_ONLY_CPU
  checkGradientTest(configFile3, true, true);
......
...@@ -144,12 +144,12 @@ PyObjectPtr createPythonClass(
    const std::map<std::string, std::string>& kwargs) {
  PyGuard guard;
  PyObjectPtr pyModule(PyImport_ImportModule(moduleName.c_str()));
  LOG(INFO) << "createPythonClass moduleName.c_str:" << moduleName.c_str();
  CHECK_PY(pyModule) << "Import module " << moduleName << " failed.";
  PyObjectPtr pyDict(PyModule_GetDict(pyModule.get()));
  CHECK_PY(pyDict) << "Get Dict failed.";
  PyObjectPtr pyClass(PyDict_GetItemString(pyDict.get(), className.c_str()));
  LOG(INFO) << "createPythonClass className.c_str():" << className.c_str();
  CHECK_PY(pyClass) << "Import class " << className << " failed.";
  PyObjectPtr argsObjectList(PyTuple_New(args.size()));
  for (size_t i = 0; i < args.size(); ++i) {
......
...@@ -35,13 +35,6 @@ limitations under the License. */
#include <Python.h>
#include <frameobject.h>
#endif

#include "paddle/utils/Util.h"
......
...@@ -13,28 +13,12 @@ See the License for the specific language governing permissions and
limitations under the License. */

#include "Stat.h"
#include <iomanip>
#include <algorithm>

namespace paddle {

StatSet globalStat("GlobalStatInfo");

void Stat::addSample(uint64_t value) {
......
...@@ -13,24 +13,10 @@ See the License for the specific language governing permissions and
limitations under the License. */

#pragma once

#include "Util.h"
#include "Logging.h"

#include <thread>

#include "Queue.h"
#include "ThreadLocal.h"
...@@ -186,7 +172,7 @@ public:
        jobFinishBarrier_(numWorkers + 1),
        jobFunc_(nullptr),
        checkOwner_(checkOwner) {
    ownerThreadId_ = getTID();
    workers_.resize(numWorkers);
    start();
  }
...@@ -210,7 +196,7 @@ public:
   */
  void exec(JobFunc jobFunc, JobFunc ownerFunc = nullptr) {
    if (checkOwner_) {
      CHECK_EQ(ownerThreadId_, getTID())
          << "this sync thread pool should be used in one thread";
    }
......
...@@ -12,10 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "ThreadLocal.h"
#include "CommandLineParser.h"

P_DEFINE_bool(thread_local_rand_use_global_seed, false,
...@@ -31,11 +29,11 @@ unsigned int* ThreadLocalRand::getSeed() {
  if (!p) {  // init seed
    if (FLAGS_thread_local_rand_use_global_seed) {
      p = new unsigned int(defaultSeed_);
    } else if (getpid() == getTID()) {  // main thread
      // deterministic, but differs from global srand()
      p = new unsigned int(defaultSeed_ - 1);
    } else {
      p = new unsigned int(defaultSeed_ + getTID());
      LOG(INFO) << "thread use undeterministic rand seed:" << *p;
    }
    seed_.set(p);
...@@ -51,7 +49,7 @@ std::default_random_engine& ThreadLocalRandomEngine::get() {
    int defaultSeed = ThreadLocalRand::getDefaultSeed();
    engine->seed(FLAGS_thread_local_rand_use_global_seed
                     ? defaultSeed
                     : defaultSeed + getTID());
    engine_.set(engine);
  }
  return *engine;
......
...@@ -93,6 +93,19 @@ static void installProfilerSwitch() {}

namespace paddle {

pid_t getTID() {
#if defined(__APPLE__) || defined(__OSX__)
  pid_t tid = syscall(SYS_thread_selfid);
#else
  #ifndef __NR_gettid
  #define __NR_gettid 224
  #endif
  pid_t tid = syscall(__NR_gettid);
#endif
  CHECK_NE(tid, -1);
  return tid;
}

static bool g_initialized = false;
typedef std::pair<int, std::function<void()>> PriorityFuncPair;
typedef std::vector<PriorityFuncPair> InitFuncList;
......
...@@ -24,6 +24,8 @@ limitations under the License. */
#include <unordered_map>
#include <mutex>
#include <functional>
#include <sys/syscall.h>  // for syscall()
#include <sys/types.h>

#include "CommandLineParser.h"
#include "Logging.h"
...@@ -63,6 +65,9 @@ limitations under the License. */

namespace paddle {

// return the thread id used by glog
pid_t getTID();

/**
 * return the 1-based index of the highest bit set
 *
......