The `infer` interface in the V2 API has some problems
Created by: lcy-seso
The `infer` interface in the V2 API has some problems which are very unreasonable in use:
- One problem is reported in this issue: https://github.com/PaddlePaddle/Paddle/issues/2170.
- The `infer` interface always flattens the prediction results given by one or multiple layers (if multiple layers are specified as outputs of the network) by concatenating them: https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/inference.py#L86
  - Such a concatenation fails if the output layers have different heights, which is a very common situation. For example, if I want to output both a sequence layer and a non-sequence layer, the former has a variable length, so the concatenation fails.
- By using the `infer` interface, users have to choose a `field` to output; the available choices are `value`, `field`, and `ids`.
  - This line https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/inference.py#L70 iterates over the specified fields for all the chosen output layers.
  - For one layer, a user may want multiple fields as outputs, such as in NMT.
  - But if multiple layers are specified as outputs of the network, a user may want different fields for different layers.
  - This causes a bug in the `infer` interface, because the current implementation constrains all output layers to output the same fields.
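The height-mismatch problem above can be reproduced in isolation. The sketch below (shapes are hypothetical, not taken from a real network) mimics what happens when the results of a non-sequence layer (one row per sample) and a sequence layer (one row per timestep) are concatenated column-wise:

```python
import numpy as np

# Hypothetical per-layer outputs: a non-sequence layer emits one row per
# sample, while a sequence layer emits one row per timestep, so the two
# result matrices generally have different heights.
non_seq_out = np.random.rand(4, 10)  # 4 samples, 10-dim output each
seq_out = np.random.rand(9, 10)      # 9 timesteps in total across the batch

try:
    # Joining the per-layer results column-wise, as the flattening
    # concatenation does, requires equal heights and therefore fails here.
    np.concatenate([non_seq_out, seq_out], axis=1)
except ValueError as e:
    print("concatenation failed:", e)
```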
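To make the same-fields constraint concrete, here is a minimal sketch (function and variable names are illustrative, not Paddle's actual API) contrasting the current behavior with the per-layer field selection a user would need:

```python
# Current behavior: the same field list is applied to every output layer,
# as in the loop at inference.py#L70.
def gather_results(layer_outputs, field):
    # layer_outputs: {layer_name: {field_name: result}}
    results = []
    for name, fields in layer_outputs.items():
        for f in field:  # identical fields for every layer
            results.append(fields[f])
    return results

# What users may actually need: different fields for different layers,
# e.g. `ids` from a decoding layer in NMT but `value` from a score layer.
def gather_results_per_layer(layer_outputs, field_by_layer):
    results = []
    for name, fields in layer_outputs.items():
        for f in field_by_layer[name]:  # per-layer field choice
            results.append(fields[f])
    return results
```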
@reyoung I will fix this because I find I cannot bypass it, but I just want you to know about the problem!