Why do paddle fluid layers use functions instead of classes?
Created by: dongfangyixi
I notice that in paddle/fluid/layers/nn.py, layers are defined as functions. Take `fc` as an example:
```python
def fc(input,
       size,
       num_flatten_dims=1,
       param_attr=None,
       bias_attr=None,
       act=None,
       is_test=False,
       name=None):
    helper = LayerHelper("fc", **locals())
    dtype = helper.input_dtype()

    mul_results = []
    for input_var, param_attr in helper.iter_inputs_and_params():
        input_shape = input_var.shape
        param_shape = [
            reduce(lambda a, b: a * b, input_shape[num_flatten_dims:], 1)
        ] + [size]
        w = helper.create_parameter(
            attr=param_attr, shape=param_shape, dtype=dtype, is_bias=False)
        tmp = helper.create_variable_for_type_inference(dtype)
        helper.append_op(
            type="mul",
            inputs={"X": input_var,
                    "Y": w},
            outputs={"Out": tmp},
            attrs={"x_num_col_dims": num_flatten_dims,
                   "y_num_col_dims": 1})
        mul_results.append(tmp)

    if len(mul_results) == 1:
        pre_bias = mul_results[0]
    else:
        pre_bias = helper.create_variable_for_type_inference(dtype)
        helper.append_op(
            type="sum",
            inputs={"X": mul_results},
            outputs={"Out": pre_bias},
            attrs={"use_mkldnn": False})
    # add bias
    pre_activation = helper.append_bias_op(pre_bias, dim_start=num_flatten_dims)
    # add activation
    return helper.append_activation(pre_activation)
```
It creates the weights inside the function. However, people may want to reuse weights across parts of their model, and in that case one cannot share them simply by calling the layer function again, since each call creates fresh parameters. Currently, I call the helper to create the weights inside my own class and maintain them there for further use. Is there any solution for sharing the weights?
I am not very sure, but based on my understanding, if layers were defined as classes in Paddle, as in TensorFlow or PyTorch, it would be easier to build models with high-level interfaces.
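A sketch of what the class-based style could look like (again pure NumPy, not an actual Paddle or PyTorch API): the parameters are created once in the constructor and live on the instance, so every call of the same object shares them by construction, with no name bookkeeping needed:

```python
import numpy as np

class Linear:
    """A class-based fc layer: parameters live on the instance."""

    def __init__(self, in_features, out_features):
        # Created once; every subsequent call reuses these arrays.
        self.w = np.random.randn(in_features, out_features).astype("float32")
        self.b = np.zeros(out_features, dtype="float32")

    def __call__(self, x):
        return x @ self.w + self.b

shared = Linear(4, 8)
a = shared(np.ones((2, 4), dtype="float32"))   # both calls use the
b = shared(np.zeros((3, 4), dtype="float32"))  # same self.w / self.b
```

Here weight sharing is just object identity: pass the same `Linear` instance to two places in the model and they share parameters, while two separate instances are independent.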