Unverified commit 60d5a912 authored by gouzil, committed by GitHub

[docs] add ipustrategy Hyperlink (#46422)

* [docs] add ipustrategy Hyperlink

* fix ipu_shard_guard docs; test=document_fix

* [docs] add set_ipu_shard note

* [docs] fix hyperlink

* update framework.py

* fix mlu_places docs; test=document_fix

* fix put_along_axis docs; test=document_fix

* fix flake8 W293 error, test=document_fix

* fix typo in typing, test=document_fix
Co-authored-by: Ligoml <39876205+Ligoml@users.noreply.github.com>
Co-authored-by: Nyakku Shigure <sigure.qaq@gmail.com>
Parent e3407a80
-# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -284,11 +284,11 @@ def ipu_shard_guard(index=-1, stage=-1):
     The sharded model will be computed from small to large. The default value is -1,
     which means no pipelining computation order and run Ops in terms of graph.
-    **Note**:
+    Note:
     Only if the enable_manual_shard=True, the 'index' is able to be set not -1. Please refer
-    to :code:`paddle.static.IpuStrategy` .
+    to :ref:`api_paddle_static_IpuStrategy`.
     Only if the enable_pipelining=True, the 'stage' is able to be set not -1. Please refer
-    to :code:`paddle.static.IpuStrategy` .
+    to :ref:`api_paddle_static_IpuStrategy`.
     A index is allowed to match none stage or a stage. A stage is only allowed to match a new or
     duplicated index.
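For readers unfamiliar with the two flags this note refers to, here is a minimal sketch of how they fit together: `enable_manual_shard` and `enable_pipelining` are switched on through `paddle.static.IpuStrategy`, after which `ipu_shard_guard` pins the ops built inside it to an IPU index and pipeline stage. The concrete config values below are illustrative assumptions, and actually running this requires a Paddle build with IPU support.

```python
import paddle

paddle.enable_static()

# Illustrative strategy: 2 IPUs, manual sharding and pipelining enabled.
ipu_strategy = paddle.static.IpuStrategy()
ipu_strategy.set_graph_config(num_ipus=2, is_training=False,
                              enable_manual_shard=True)
ipu_strategy.set_pipelining_config(enable_pipelining=True, batches_per_step=4)

a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
with paddle.static.ipu_shard_guard(index=0, stage=0):
    b = a + 1  # ops created here run on IPU 0, pipeline stage 0
with paddle.static.ipu_shard_guard(index=1, stage=1):
    c = b + 1  # ops created here run on IPU 1, pipeline stage 1
```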
@@ -329,6 +329,11 @@ def set_ipu_shard(call_func, index=-1, stage=-1):
     """
     Shard the ipu with the given call function. Set every ops in call function to the given ipu sharding.
+
+    Note:
+        Only when enable_manual_shard=True to set the index to a value other than -1. please refer to :ref:`api_paddle_static_IpuStrategy` .
+        Only when enable_pipelining=True to set stage to a value other than -1. please refer to :ref:`api_paddle_static_IpuStrategy` .
+        An index supports a corresponding None stage or a stage, and a stage only supports a new index or a duplicate index.
     Args:
         call_func(Layer|function): Specify the call function to be wrapped.
         index(int, optional): Specify which ipu the Tensor is computed on, (such as ‘0, 1, 2, 3’).

@@ -340,7 +345,6 @@ def set_ipu_shard(call_func, index=-1, stage=-1):
     Returns:
         The wrapped call function.
-
     Examples:
         .. code-block:: python
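As a usage sketch of this wrapped-callable form (the layer and shapes are assumptions for illustration, and an IPU-enabled build is required), `set_ipu_shard` applies one placement to every op a callable creates:

```python
import paddle

paddle.enable_static()

a = paddle.static.data(name='data', shape=[None, 1], dtype='float32')
relu = paddle.nn.ReLU()
# Every op created when calling the wrapped layer lands on IPU 1, stage 1.
sharded_relu = paddle.static.set_ipu_shard(relu, index=1, stage=1)
b = sharded_relu(a)
```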
@@ -1002,8 +1006,6 @@ def cuda_pinned_places(device_count=None):
 def mlu_places(device_ids=None):
     """
-    **Note**:
-        For multi-card tasks, please use `FLAGS_selected_mlus` environment variable to set the visible MLU device.
     This function creates a list of :code:`paddle.device.MLUPlace` objects.
     If :code:`device_ids` is None, environment variable of
     :code:`FLAGS_selected_mlus` would be checked first. For example, if

@@ -1016,6 +1018,9 @@
     the returned list would be
     [paddle.device.MLUPlace(0), paddle.device.MLUPlace(1), paddle.device.MLUPlace(2)].
+
+    Note:
+        For multi-card tasks, please use `FLAGS_selected_mlus` environment variable to set the visible MLU device.
     Parameters:
         device_ids (list or tuple of int, optional): list of MLU device ids.
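A short sketch of the two ways of selecting devices described in the docstring (assumes a Paddle build compiled with MLU support; the device ids are illustrative):

```python
import paddle
import paddle.static as static

paddle.enable_static()

# Rely on the environment, e.g. export FLAGS_selected_mlus=0,1,2 ...
places = static.mlu_places()
# ... or pass the device ids explicitly.
places = static.mlu_places(device_ids=[0, 1, 2])
```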
@@ -4327,8 +4327,9 @@ def put_along_axis(arr, indices, values, axis, reduce='assign'):
         indices (Tensor) : Indices to put along each 1d slice of arr. This must match the dimension of arr,
             and need to broadcast against arr. Supported data type are int and int64.
         axis (int) : The axis to put 1d slices along.
-        reduce (string | optinal) : The reduce operation, default is 'assign', support 'add', 'assign', 'mul' and 'multiply'.
-    Returns :
+        reduce (str, optional): The reduce operation, default is 'assign', support 'add', 'assign', 'mul' and 'multiply'.
+
+    Returns:
         Tensor: The indexed element, same dtype with arr
     Examples:
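To make the `reduce` argument concrete, a small sketch in the spirit of the docstring's own example (input values chosen for illustration):

```python
import paddle

x = paddle.to_tensor([[10, 30, 20], [60, 40, 50]])
index = paddle.to_tensor([[0]])  # one index per 1d slice, broadcast against x

# Default 'assign': overwrite row 0 of every column with 99.
print(paddle.put_along_axis(x, index, 99, 0))
# [[99, 99, 99],
#  [60, 40, 50]]

# 'add': accumulate into the selected slice instead of overwriting.
print(paddle.put_along_axis(x, index, 1, 0, reduce='add'))
# [[11, 31, 21],
#  [60, 40, 50]]
```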