diff --git a/doc/build_and_install/index_cn.rst b/doc/build_and_install/index_cn.rst
index c0b60f55895f11bcbaa06bc65c973180b3661cfc..e079bb661f3a5141a09dfbc6893d1bf945697bc9 100644
--- a/doc/build_and_install/index_cn.rst
+++ b/doc/build_and_install/index_cn.rst
@@ -3,30 +3,54 @@
 .. _install_steps:
 
-安装流程
-++++++++
+PaddlePaddle针对不同的用户群体提供了多种安装方式。
 
-PaddlePaddle提供pip和Docker的安装方式:
+专注深度学习模型开发
+-----------------
+
+PaddlePaddle提供了多种python wheel包,可通过pip一键安装:
+
+.. toctree::
+  :maxdepth: 1
+
+  pip_install_cn.rst
+
+这是最便捷的安装方式,请根据机器配置和系统选择对应的安装包。
+
+关注底层框架
+----------
+
+PaddlePaddle提供了基于Docker的安装方式,请参照以下教程:
 
 .. toctree::
-	:maxdepth: 1
+  :maxdepth: 1
 
-	pip_install_cn.rst
-	docker_install_cn.rst
+  docker_install_cn.rst
 
-编译流程
-++++++++
+我们推荐在Docker中运行PaddlePaddle,该方式具有以下优势:
 
-.. warning::
+- 无需单独安装第三方依赖
+- 方便分享运行时环境,易于问题的复现
 
-	建议直接使用上述安装流程,方便快速安装。只有在遇到需要独立定制的二进制时才需要编译。
+对于有定制化二进制文件需求的用户,我们同样提供了从源码编译安装PaddlePaddle的方法:
 
-.. toctree::
+.. toctree::
   :maxdepth: 1
 
   build_from_source_cn.rst
 
-常见问题解答
-++++++++++
+.. warning::
+
+  需要提醒的是,这种安装方式会涉及到一些第三方库的下载、编译及安装,整个安装过程耗时较长。
+
+
+常见问题汇总
+-----------
+
+如果在安装过程中遇到了问题,请先尝试在下面的页面寻找答案:
+
+:ref:`常见问题解答 <install_faq>`
+
+如果问题没有得到解决,欢迎向PaddlePaddle社区反馈问题:
 
-`常见问题解答 `_
+`创建issue `_
diff --git a/doc/faq/build_and_install/index_cn.rst b/doc/faq/build_and_install/index_cn.rst
index 31d2252eb5f5f6a87b1c93f36008fc4468795896..7c7e896d187e4fe1544d7ec933fa4fa9f24df3cd 100644
--- a/doc/faq/build_and_install/index_cn.rst
+++ b/doc/faq/build_and_install/index_cn.rst
@@ -1,3 +1,5 @@
+.. _install_faq:
+
 ###################
 编译安装与单元测试
 ###################
diff --git a/doc/howto/cmd_parameter/index_cn.rst b/doc/howto/cmd_parameter/index_cn.rst
index 17b379f6295d66d864e2b53108012eff5895d96b..6900bb1443e611d326e8d5640e794ac2b9079beb 100644
--- a/doc/howto/cmd_parameter/index_cn.rst
+++ b/doc/howto/cmd_parameter/index_cn.rst
@@ -2,10 +2,25 @@
 命令行参数设置
 ===============
 
+深度学习算法的实现有着多样化的特点,运行环境、运行阶段、模型结构、训练策略等等这些都是常见的变化因素。PaddlePaddle支持用户灵活地设置各种命令行参数,以实现对模型训练或预测流程的控制。
+
+在这一部分,首先以几个实际场景为例,展示了部分命令行参数的使用:
 
 .. toctree::
   :maxdepth: 1
 
   use_case_cn.md
+
+接着对所有参数的使用场合进行概述和分类:
+
+.. toctree::
+  :maxdepth: 1
+
+  arguments_cn.md
+
+最后给出细节描述,详细解释这些参数的属性和意义:
+
+.. toctree::
+  :maxdepth: 1
+
+  detail_introduction_cn.md
diff --git a/paddle/fluid/framework/dim.h b/paddle/fluid/framework/dim.h
index 58b75ba4b5a1973d97aedcea0f76681b625afa65..73f92fa389fa3a66a14ae60b8dbfbcae80485658 100644
--- a/paddle/fluid/framework/dim.h
+++ b/paddle/fluid/framework/dim.h
@@ -157,8 +157,7 @@ HOSTDEVICE int64_t& indexer<0>(Dim<0>& dim, int idx) {
   throw std::invalid_argument("Invalid index");
 #else
   PADDLE_ASSERT(false);
-#endif
-#if (defined __CUDA_ARCH__) && (CUDA_VERSION < 8000)
+#if CUDA_VERSION < 8000
   // On CUDA versions previous to 8.0, only __shared__ variables
   // could be declared as static in the device code.
   int64_t head = 0;
@@ -166,6 +165,7 @@ HOSTDEVICE int64_t& indexer<0>(Dim<0>& dim, int idx) {
   static int64_t head = 0;
 #endif
   return head;
+#endif
 }
 
 template <>
@@ -189,8 +189,7 @@ HOSTDEVICE int64_t indexer<0>(const Dim<0>& dim, int idx) {
   throw std::invalid_argument("Invalid index");
 #else
   PADDLE_ASSERT(false);
-#endif
-#if (defined __CUDA_ARCH__) && (CUDA_VERSION < 8000)
+#if CUDA_VERSION < 8000
   // On CUDA versions previous to 8.0, only __shared__ variables
   // could be declared as static in the device code.
   int64_t head = 0;
@@ -198,6 +197,7 @@ HOSTDEVICE int64_t indexer<0>(const Dim<0>& dim, int idx) {
   static int64_t head = 0;
 #endif
   return head;
+#endif
 }
 
 }  // namespace
diff --git a/paddle/fluid/operators/mine_hard_examples_op.cc b/paddle/fluid/operators/mine_hard_examples_op.cc
index b7e9f4e2248882103a3926c309d6fc455442c917..0e81d60878dce747b047abbe4641b71462373b2b 100644
--- a/paddle/fluid/operators/mine_hard_examples_op.cc
+++ b/paddle/fluid/operators/mine_hard_examples_op.cc
@@ -247,7 +247,7 @@ class MineHardExamplesOp : public framework::OperatorWithKernel {
       const framework::ExecutionContext& ctx) const override {
     return framework::OpKernelType(
         framework::ToDataType(ctx.Input<framework::Tensor>("ClsLoss")->type()),
-        ctx.device_context());
+        platform::CPUPlace());
   }
 };
 
diff --git a/python/paddle/fluid/layers/detection.py b/python/paddle/fluid/layers/detection.py
index a077c0ce334f13f52a74be745d65ad893130c803..d16b4dc3a482d657158636f0e3a2c1d07bae7d8c 100644
--- a/python/paddle/fluid/layers/detection.py
+++ b/python/paddle/fluid/layers/detection.py
@@ -54,11 +54,17 @@ def detection_output(loc,
                      score_threshold=0.01,
                      nms_eta=1.0):
     """
-    **Detection Output Layer**
+    **Detection Output Layer for Single Shot Multibox Detector (SSD).**
 
-    This layer applies the NMS to the output of network and computes the
-    predict bounding box location. The output's shape of this layer could
-    be zero if there is no valid bounding box.
+    This operation obtains the detection results by performing the following
+    two steps:
+
+    1. Decode the input bounding box predictions according to the prior boxes.
+    2. Get the final detection results by applying multi-class non maximum
+       suppression (NMS).
+
+    Please note that this operation doesn't clip the final output bounding
+    boxes to the image window.
 
     Args:
         loc(Variable): A 3-D Tensor with shape [N, M, 4] represents the
@@ -91,7 +97,15 @@ def detection_output(loc,
         nms_eta(float): The parameter for adaptive NMS.
 
     Returns:
-        The detected bounding boxes which are a Tensor.
+        Variable: The detection output is a LoDTensor with shape [No, 6].
+        Each row has six values: [label, confidence, xmin, ymin, xmax, ymax].
+        `No` is the total number of detections in this mini-batch. For each
+        instance, the offsets in the first dimension are called LoD; the
+        offset number is N + 1, where N is the batch size. The i-th image has
+        `LoD[i + 1] - LoD[i]` detected results; if it is 0, the i-th image
+        has no detected results. If no image has any detected results, all
+        the elements in LoD are 0, and the output tensor only contains one
+        value, which is -1.
 
     Examples:
         .. code-block:: python
diff --git a/python/paddle/fluid/layers/nn.py b/python/paddle/fluid/layers/nn.py
index a10463b52c62039ff6d94b90f9b62135933d1ad9..e10a01a5d7cb562d892541edb61cc9249b428104 100644
--- a/python/paddle/fluid/layers/nn.py
+++ b/python/paddle/fluid/layers/nn.py
@@ -3206,7 +3206,7 @@ def one_hot(input, depth):
     operator.
 
     Args:
-        input(Tensor/LodTensor): A Tensor/LodTensor of indices, last dimension must be 1.
+        input(Variable): A Tensor/LoDTensor of indices, last dimension must be 1.
         depth(scalar): an interger defining the depth of the one hot dimension.
 
     Returns:
diff --git a/python/paddle/fluid/layers/tensor.py b/python/paddle/fluid/layers/tensor.py
index 8100e8f034fb5d6ca706d1408f448fa26193f282..da066c34bdeba1f1b76f8d1cafd9244b2f7708fa 100644
--- a/python/paddle/fluid/layers/tensor.py
+++ b/python/paddle/fluid/layers/tensor.py
@@ -362,3 +362,75 @@ def zeros(shape, dtype, force_cpu=False):
           data = fluid.layers.zeros(shape=[1], dtype='int64')
     """
     return fill_constant(value=0.0, **locals())
+
+
+def save(x, file_path, overwrite=True):
+    """
+    Saves a variable as a file.
+
+    Args:
+        x(Variable): The Tensor/LoDTensor to be saved.
+        file_path(str): The file path where the variable will be saved.
+        overwrite(bool): Whether or not to overwrite the given file if it
+            already exists. If set to False and the file exists, a runtime
+            error will be raised.
+    """
+    helper = LayerHelper("save", **locals())
+    helper.append_op(
+        type="save",
+        inputs={"input": x},
+        outputs={},
+        attrs={"file_path": file_path,
+               "overwrite": overwrite})
+
+
+def save_combine(x, file_path, overwrite=True):
+    """
+    Saves a list of variables into a single file.
+
+    Args:
+        x(list): A list of Tensors/LoDTensors to be saved together in a single file.
+        file_path(str): The file path where the variables will be saved.
+        overwrite(bool): Whether or not to overwrite the given file if it
+            already exists. If set to False and the file exists, a runtime
+            error will be raised.
+    """
+    helper = LayerHelper("save_combine", **locals())
+    helper.append_op(
+        type="save_combine",
+        inputs={"input": x},
+        outputs={},
+        attrs={"file_path": file_path,
+               "overwrite": overwrite})
+
+
+def load(out, file_path):
+    """
+    Loads a variable from a given file.
+
+    Args:
+        out(Variable): The variable to be read from the disk file.
+        file_path(str): The path of the disk file.
+    """
+    helper = LayerHelper("load", **locals())
+    helper.append_op(
+        type="load",
+        inputs={},
+        outputs={"Out": out},
+        attrs={"file_path": file_path})
+
+
+def load_combine(out, file_path):
+    """
+    Loads a list of variables from a single file.
+
+    Args:
+        out(list): The list of variables to be read from the disk file.
+        file_path(str): The path of the disk file.
+    """
+    helper = LayerHelper("load_combine", **locals())
+    helper.append_op(
+        type="load_combine",
+        inputs={},
+        outputs={"Out": out},
+        attrs={"file_path": file_path})
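
Below is a minimal usage sketch for the `save` and `load` layers added to python/paddle/fluid/layers/tensor.py in this patch. It is not part of the diff: the program structure, variable names, and file path are illustrative only, and it assumes the underlying `save`/`load` ops accept the inputs the new wrappers pass to them:

    import paddle.fluid as fluid
    # Import the new layers directly from the module this patch extends,
    # in case they are not re-exported through fluid.layers.__all__.
    from paddle.fluid.layers.tensor import save, load

    main_program = fluid.Program()
    startup_program = fluid.Program()
    with fluid.program_guard(main_program, startup_program):
        # Build a constant tensor and persist it with the new `save` layer.
        x = fluid.layers.fill_constant(shape=[2, 3], dtype='float32', value=1.0)
        save(x, file_path='/tmp/x.tensor', overwrite=True)  # illustrative path

        # Read the saved tensor back into a freshly created variable
        # with the new `load` layer.
        y = fluid.layers.create_tensor(dtype='float32')
        load(y, file_path='/tmp/x.tensor')

    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(startup_program)
    exe.run(main_program)

`save_combine` and `load_combine` follow the same pattern but take a list of variables, which can be convenient for keeping several related tensors in one file.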