Commit d23c5417 authored by mindspore-ci-bot, committed by Gitee

!702 Update constraints_on_network_construction.md

Merge pull request !702 from fanglei/master
...@@ -93,7 +93,7 @@
| `**` |Scalar and `Tensor`
| `//` |Scalar and `Tensor`
| `%` |Scalar and `Tensor`
| `[]` |The operation object type can be `list`, `tuple`, or `Tensor`. Multiple-subscript accesses on lists and dictionaries can be used as r-values but not as l-values. The index type can be a Tensor only when the operation object is a tuple or list whose elements are all of type `nn.Cell`. For access constraints on the tuple and Tensor types, see the description of slicing operations.
### Index operation
...@@ -145,7 +145,7 @@ The index operation includes `tuple` and `Tensor`. The following focuses on the
- Assignment: for example `tensor_x[..., ::, 1:] = u`.
- Other situations are not supported.
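For intuition, the supported slice-assignment form mirrors NumPy's basic indexing. The sketch below uses NumPy purely as an analogy; it is not MindSpore code, and graph-mode `Tensor` indexing carries the additional constraints listed in this section:

```python
import numpy as np

# NumPy analogy for the supported form tensor_x[..., ::, 1:] = u.
# Ellipsis covers the leading dimension, `::` takes every row,
# and `1:` selects columns 1 onward for assignment.
tensor_x = np.zeros((2, 3, 4))
u = 7.0
tensor_x[..., ::, 1:] = u

print(tensor_x[0, 0])  # [0. 7. 7. 7.] -- column 0 untouched, the rest set to u
```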
For index value operations on the tuple and list types, the case that deserves attention is indexing into a tuple or list whose elements are of type `nn.Cell`. This operation is currently supported only by the GPU backend in Graph mode. Its syntax takes the form `layers[index](*inputs)`; example code is as follows:
```python
class Net(nn.Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax()
        self.layers = (self.relu, self.softmax)

    def construct(self, x, index):
        x = self.layers[index](x)
        return x
```
This syntax has the following constraints:
* Only index value operations on a tuple or list whose element type is `nn.Cell` are supported.
* The index must be a scalar `Tensor` of type `int32`, with a value range of `[-n, n)`, where `n` is the size of the tuple; the maximum supported tuple size is 1000.
* The `construct` function of every Cell element in the tuple must take inputs that match in number, type, and shape, and must likewise produce outputs that match in number, type, and shape.
* Each element in the tuple must be defined before the tuple itself is defined.
* This syntax cannot be used inside a branch of control flow such as `if`, `while`, or `for`, unless the control condition of the control flow is a constant. For example:
- Supported example:
```python
class Net(nn.Cell):
def __init__(self, flag=True):
super(Net, self).__init__()
self.flag = flag
self.relu = nn.ReLU()
self.softmax = nn.Softmax()
self.layers = (self.relu, self.softmax)
def construct(self, x, index):
if self.flag:
x = self.layers[index](x)
return x
```
- Unsupported example:
```python
class Net(nn.Cell):
def __init__(self):
super(Net, self).__init__()
self.relu = nn.ReLU()
self.softmax = nn.Softmax()
self.layers = (self.relu, self.softmax)
def construct(self, x, index, flag):
if flag:
x = self.layers[index](x)
return x
```
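Stripped of MindSpore specifics, `layers[index](x)` is ordinary Python: indexing a tuple of callables and invoking the result. The sketch below uses plain functions as stand-ins for `nn.Cell` elements (an illustration only, not MindSpore code) to show the dispatch and the `[-n, n)` index range:

```python
# Plain-Python stand-ins for the nn.Cell elements (illustration only).
def relu(x):
    return max(0.0, x)

def identity(x):
    return x

layers = (relu, identity)

n = len(layers)
for index in (-n, 0, n - 1):  # valid indices lie in [-n, n)
    assert -n <= index < n
    _ = layers[index](-2.5)   # dispatch to the selected callable

print(layers[0](-2.5))   # 0.0 -- relu clamps negative inputs
print(layers[-1](-2.5))  # -2.5 -- negative index selects from the end
```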
Tuples also support slice value operations, but the slice cannot be of type Tensor. The form `tuple_x[start:stop:step]` is supported and has the same effect as in Python, so it is not described again here.
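Since `tuple_x[start:stop:step]` follows standard Python semantics, a quick reminder of how the three slice fields behave:

```python
tuple_x = (0, 1, 2, 3, 4, 5)

print(tuple_x[1:5:2])  # (1, 3)      -- every second element from index 1 up to 5
print(tuple_x[::-1])   # (5, 4, 3, 2, 1, 0)  -- negative step reverses
print(tuple_x[:3])     # (0, 1, 2)   -- omitted fields default to the ends
```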
### Unsupported Syntax