Created by: Superjomn
We will have more tensor types, such as:
- Tensor
- LODTensor
- SparseTensor
- SparseLODTensor (maybe in the future)
Some operators (FC, e.g.) only process `Tensor`. When an `LODTensor` is passed in as an input, FC assumes that the outputs are all `Tensor`s, but the framework should copy the LOD info from inputs to outputs transparently and make the outputs `LODTensor`s, so that the LOD info will not be dropped.
So a type deducer is needed: given an operator's input tensor types and attributes, it deduces the output tensor types.
We treat the type deducer as a container of various deduce rules, and it will only be used in `OperatorBase.InferShape`,
when the operator first visits its outputs.
New tensor types will be free to add, along with new deduce rules that help deduce the output tensor types, for example:
- LODTensor -> Tensor
- SparseTensor -> Tensor
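As a minimal sketch of the "container of deduce rules" idea, the registry below maps a rule name to a function from input tensor types to an output tensor type. The names `TensorType`, `DeduceRule`, and `TypeDeducerRegistry` are hypothetical, not from the actual framework:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical type tags for the tensor kinds listed above.
enum class TensorType { kTensor, kLODTensor, kSparseTensor };

// A deduce rule maps the input tensor types to the output tensor type.
using DeduceRule =
    std::function<TensorType(const std::vector<TensorType>&)>;

// The type deducer is just a container of rules, keyed by name here.
class TypeDeducerRegistry {
 public:
  void Register(const std::string& name, DeduceRule rule) {
    rules_[name] = std::move(rule);
  }
  TensorType Deduce(const std::string& name,
                    const std::vector<TensorType>& inputs) const {
    return rules_.at(name)(inputs);
  }

 private:
  std::map<std::string, DeduceRule> rules_;
};
```

A rule such as "if any input is an `LODTensor`, the output is an `LODTensor`" would then be one entry in this registry, and adding a new tensor type only requires registering new rules.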
A complete deduction example:
- `RNN->FC`: FC will get one or more `LODTensor`s as inputs, and FC assumes the outputs are `Tensor`s.
- When FC needs an output, it calls `auto output1 = TypeDeducer<Tensor>(input_vars, op_attrs, var_name)` and gets a `Tensor*`; but `TypeDeducer` will in fact make this output an `LODTensor`, and return its `LODTensor.get_tensor()`.
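The step above can be sketched as follows. The `Tensor`/`LODTensor` stand-ins, the global scope, and the non-template `TypeDeducerTensor` function are all simplified assumptions for illustration; the real framework stores variables in a scope and uses a templated `TypeDeducer`:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Minimal stand-ins for the framework types (hypothetical).
struct Tensor {
  std::vector<int> dims;
};

struct LODTensor {
  std::vector<std::vector<size_t>> lod;  // the LOD (level-of-detail) info
  Tensor tensor;
  Tensor* get_tensor() { return &tensor; }
};

// Hypothetical variable scope: variable name -> LODTensor.
static std::map<std::string, std::unique_ptr<LODTensor>> g_scope;

// Sketch of the call from the doc: the operator asks for a plain Tensor*,
// but the deducer actually creates an LODTensor, copies the LOD info from
// the inputs transparently, and hands back only the inner Tensor*.
Tensor* TypeDeducerTensor(const std::vector<LODTensor*>& input_vars,
                          const std::string& var_name) {
  auto out = std::make_unique<LODTensor>();
  if (!input_vars.empty()) {
    out->lod = input_vars.front()->lod;  // copy LOD from an input
  }
  Tensor* raw = out->get_tensor();
  g_scope[var_name] = std::move(out);
  return raw;
}
```

This way FC only ever sees a `Tensor*`, while the framework keeps the output stored as an `LODTensor` so the LOD info survives.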
This implementation is just a demo of the idea, and far from the actual usage.
Ignore the implementation of `LODTensor`; it is not finished and belongs to another PR.