StaticInputV2 and GeneratedInputV2 missing in paddle.v2.layer
Created by: alvations
When following the MT chapter in the DL101 book and the machine translation demo with the new Python API, paddle.layer.StaticInputV2
and paddle.layer.GeneratedInputV2
throw an AttributeError, e.g.:
AttributeError Traceback (most recent call last)
<ipython-input-5-5ce86945bbbe> in <module>()
----> 1 cost = seqToseq_net(source_dict_dim, target_dict_dim)
/Users/liling.tan/seqtoseq.py in seqToseq_net(source_dict_dim, target_dict_dim, is_generating)
160
161 decoder_group_name = "decoder_group"
--> 162 group_input1 = paddle.layer.StaticInputV2(input=encoded_vector, is_seq=True)
163 group_input2 = StaticInputV2(input=encoded_proj, is_seq=True)
164 group_inputs = [group_input1, group_input2]
AttributeError: 'module' object has no attribute 'StaticInputV2'
A closer look shows that they are missing from the top-level paddle.layer namespace (paddle.layer.__init__.py):
>>> import paddle.v2 as paddle
>>> dir(paddle.layer)
['AggregateLevel', 'BaseGeneratedInput', 'ExpandLevel', 'GeneratedInput', 'LayerOutput', 'ModelConfig', 'SubModelConfig', 'SubsequenceInput', '__all__', '__builtins__', '__convert_name__', '__convert_to_v2__', '__data_layer__', '__doc__', '__file__', '__get_used_evaluators__', '__get_used_layers__', '__get_used_parameters__', '__get_used_submodels__', '__map_data_docstr__', '__name__', '__need_to_keep__', '__need_to_wrap__', '__package__', '__trim_submodel__', 'addto', 'batch_norm', 'beam_search', 'bilinear_interp', 'block_expand', 'classification_cost', 'collections', 'concat', 'config_base', 'context_projection', 'conv_operator', 'conv_projection', 'conv_shift', 'convex_comb', 'copy', 'cos_sim', 'cp', 'crf', 'crf_decoding', 'cross_channel_norm', 'cross_entropy_cost', 'cross_entropy_with_selfnorm_cost', 'ctc', 'data', 'dotmul_operator', 'dotmul_projection', 'embedding', 'eos', 'expand', 'fc', 'first_seq', 'full_matrix_projection', 'get_layer', 'get_output', 'gru_step', 'gru_step_naive', 'grumemory', 'hsigmoid', 'huber_cost', 'identity_projection', 'img_cmrnorm', 'img_conv', 'img_pool', 'interpolation', 'lambda_cost', 'last_seq', 'linear_comb', 'lstm_step', 'lstmemory', 'max_id', 'maxout', 'memory', 'mixed', 'mse_cost', 'multi_binary_label_cross_entropy_cost', 'multiplex', 'name', 'nce', 'new_name', 'obj', 'out_prod', 'pad', 'parse_network', 'pooling', 'power', 'print', 'priorbox', 'rank_cost', 're', 'recurrent', 'recurrent_group', 'regression_cost', 'repeat', 'rotate', 'sampling_id', 'scaling', 'scaling_projection', 'selective_fc', 'seq_concat', 'seq_reshape', 'slope_intercept', 'smooth_l1_cost', 'spp', 'sum_cost', 'sum_to_one_norm', 'table_projection', 'tensor', 'trans', 'trans_full_matrix_projection', 'v1_layers', 'warp_ctc']
>>> 'StaticInputV2' in dir(paddle.layer)
False
>>> 'GeneratedInputV2' in dir(paddle.layer)
False
Currently the ad-hoc workaround is to import the equivalents from the old paddle.trainer_config_helpers module, e.g.:
>>> from paddle.trainer_config_helpers import GeneratedInput
>>> from paddle.trainer_config_helpers import StaticInput
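For reference, this is roughly how the workaround slots into the failing lines of seqToseq_net (a sketch only, assuming the old-API StaticInput accepts the same input/is_seq keyword arguments that the v2 call above was given; encoded_vector and encoded_proj are the layers already built earlier in the demo):
>>> from paddle.trainer_config_helpers import StaticInput
>>> # substitute the missing paddle.layer.StaticInputV2 calls:
>>> group_input1 = StaticInput(input=encoded_vector, is_seq=True)
>>> group_input2 = StaticInput(input=encoded_proj, is_seq=True)
>>> group_inputs = [group_input1, group_input2]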
This is with version 0.10.0 of the Python API:
$ pip show paddle
Name: paddle
Version: 0.10.0
Summary: Parallel Distributed Deep Learning
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Location: /usr/local/lib/python2.7/site-packages
Requires: rarfile, matplotlib, protobuf, opencv-python, numpy, requests
$ paddle version
PaddlePaddle 0.10.0, compiled with
with_avx: ON
with_gpu: OFF
with_double: OFF
with_python: ON
with_rdma: OFF
with_timer: OFF