diff --git a/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md b/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md
index fd1a93d2764234ec6b91cfae0e07fc6f2fd9bd95..5769bbfb6333a74959eaafeedaef67ddf855be26 100644
--- a/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md
+++ b/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md
@@ -26,6 +26,8 @@ By default, MindSpore is in PyNative mode. You can switch it to the graph mode b
 
 In PyNative mode, single operators, common functions, network inference, and separated gradient calculation can be executed. The following describes the usage and precautions.
 
+> In PyNative mode, operators are executed asynchronously on the device to improve performance. Therefore, when an error occurs during operator execution, the error information may be displayed only after the program finishes executing.
+
 ## Executing a Single Operator
 
 Execute a single operator and output the result, as shown in the following example.
@@ -243,6 +245,7 @@ print(z.asnumpy())
 [ 0.0377498 -0.06117418 0.00546303]]]]
 ```
 
+
 ## Debugging Network Train Model
 
 In PyNative mode, the gradient can be calculated separately. As shown in the following example, `GradOperation` is used to calculate all input gradients of the function or the network. Note that the inputs have to be Tensor.
diff --git a/tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md b/tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md
index 349b8c85031d37131a78df419323b6635418d101..4289a3fd501f8247312f7c62f2d5fd4b34e3d7bb 100644
--- a/tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md
+++ b/tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md
@@ -28,6 +28,8 @@ MindSpore supports two running modes, which differ in debugging and running
 
 In PyNative mode, executing single operators, common functions, and networks, as well as calculating gradients separately, is supported. The following describes the usage and precautions in detail.
 
+> In PyNative mode, operators are executed asynchronously on the device to improve performance. Therefore, when an operator fails, the error information may not be displayed until the program finishes executing.
+
 ## Executing a Single Operator
 
 Execute a single operator and print the result, as shown in the following example.
@@ -245,6 +247,7 @@ print(z.asnumpy())
 [ 0.0377498 -0.06117418 0.00546303]]]]
 ```
 
+
 ## Debugging Network Train Model
 
 In PyNative mode, calculating gradients separately is also supported. As shown in the following example, `GradOperation` can be used to calculate all input gradients of the function or the network. Note that only Tensor inputs are supported.
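
The note added in both files documents the asynchronous operator dispatch behavior. Below is a minimal sketch of the PyNative workflow the note applies to; it assumes the MindSpore API of this tutorial's era (`context.set_context`, `mindspore.ops.operations.TensorAdd`, later renamed `Add`), so names may differ in other releases.

```python
# Minimal sketch (assumed era-appropriate MindSpore API). Illustrates
# eager, per-operator execution in PyNative mode as described by the
# note added in this diff.
import numpy as np
import mindspore.context as context
import mindspore.ops.operations as P
from mindspore import Tensor

# Switch to PyNative mode. Operators are dispatched to the device
# asynchronously, so an operator that fails may report its error only
# after subsequent Python statements have already run.
context.set_context(mode=context.PYNATIVE_MODE, device_target="CPU")

x = Tensor(np.ones([2, 3], dtype=np.float32))
y = Tensor(np.ones([2, 3], dtype=np.float32))

# Each call executes immediately and returns a concrete Tensor;
# asnumpy() synchronizes with the device, which is where any pending
# error from an earlier asynchronous launch would surface.
z = P.TensorAdd()(x, y)
print(z.asnumpy())
```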