Unverified commit 08fb079d, authored by lilong12 and committed by GitHub

Fix the doc for shard_index api (#29183)

* update, test=develop
Parent 058f1b22
@@ -14696,9 +14696,10 @@ def deformable_roi_pooling(input,
     return output
 
 
+@deprecated(since="2.0.0", update_to="paddle.shard_index")
 def shard_index(input, index_num, nshards, shard_id, ignore_value=-1):
     """
-    This operator recomputes the `input` indices according to the offset of the
+    Recompute the `input` indices according to the offset of the
     shard. The length of the indices is evenly divided into N shards, and if
     the `shard_id` matches the shard with the input index inside, the index is
     recomputed on the basis of the shard offset, otherwise it is set to
@@ -14711,44 +14712,27 @@ def shard_index(input, index_num, nshards, shard_id, ignore_value=-1):
     NOTE: If the length of indices cannot be evenly divided by the shard number,
     the size of the last shard will be less than the calculated `shard_size`.
 
-    Examples:
-    ::
-
-        Input:
-          X.shape = [4, 1]
-          X.data = [[1], [6], [12], [19]]
-          index_num = 20
-          nshards = 2
-          ignore_value = -1
-
-        if shard_id == 0, we get:
-          Out.shape = [4, 1]
-          Out.data = [[1], [6], [-1], [-1]]
-
-        if shard_id == 1, we get:
-          Out.shape = [4, 1]
-          Out.data = [[-1], [-1], [2], [9]]
-
     Args:
-        - **input** (Variable): Input indices, last dimension must be 1.
-        - **index_num** (scalar): An integer defining the range of the index.
-        - **nshards** (scalar): The number of shards
-        - **shard_id** (scalar): The index of the current shard
-        - **ignore_value** (scalar): An integer value out of sharded index range
+        input (Tensor): Input indices with data type int64. Its last dimension must be 1.
+        index_num (int): An integer defining the range of the index.
+        nshards (int): The number of shards.
+        shard_id (int): The index of the current shard.
+        ignore_value (int): An integer value out of sharded index range.
 
     Returns:
-        Variable: The sharded index of input.
+        Tensor: The sharded index of input.
 
     Examples:
         .. code-block:: python
 
-            import paddle.fluid as fluid
-            batch_size = 32
-            label = fluid.data(name="label", shape=[batch_size, 1], dtype="int64")
-            shard_label = fluid.layers.shard_index(input=label,
-                                                   index_num=20,
-                                                   nshards=2,
-                                                   shard_id=0)
+            import paddle
+            label = paddle.to_tensor([[16], [1]], "int64")
+            shard_label = paddle.shard_index(input=label,
+                                             index_num=20,
+                                             nshards=2,
+                                             shard_id=0)
+            print(shard_label)
+            # [[-1], [1]]
     """
     check_variable_and_dtype(input, 'input', ['int64'], 'shard_index')
     op_type = 'shard_index'
...
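For reviewers, the new example's output can be sanity-checked against the sharding rule the docstring describes. Below is a minimal pure-Python sketch of that rule, assuming ceil-division shard sizing; the helper name shard_index_ref is hypothetical and is not part of Paddle's API:

    def shard_index_ref(indices, index_num, nshards, shard_id, ignore_value=-1):
        # Hypothetical reference sketch of the documented rule, not Paddle's kernel.
        # Assumes each shard covers shard_size consecutive index values (ceil division).
        shard_size = (index_num + nshards - 1) // nshards
        out = []
        for [x] in indices:  # the last dimension of the input must be 1
            if x // shard_size == shard_id:
                out.append([x % shard_size])  # recompute relative to the shard offset
            else:
                out.append([ignore_value])    # index falls in another shard
        return out

    print(shard_index_ref([[16], [1]], index_num=20, nshards=2, shard_id=0))
    # [[-1], [1]], matching the updated docstring example

The same rule also reproduces the inline example this commit removes from the docstring: with X = [[1], [6], [12], [19]], index_num=20, and nshards=2, shard_id=0 gives [[1], [6], [-1], [-1]] and shard_id=1 gives [[-1], [-1], [2], [9]].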