# Overview

Imperative programming makes it easier to learn, debug, and try out new ideas.

# Related Works

## Pytorch
https://pytorch.org/

## TensorFlow Eager
https://www.tensorflow.org/guide/eager

# Design

## API
```python
class Layer(object):

  def __call__(self, inputs):
    # build the parameters once.
    # ...
    return self.forward(inputs)

  def forward(self, inputs):
    # forward logic with paddle operators. backward auto-generated.
    pass


class PyLayer(core.PyLayer):

  def __call__(cls, inputs):
    # trace the logic.
    pass

  @staticmethod
  def forward(inputs):
    # any forward logic implemented with numpy io.
    pass

  @staticmethod
  def backward(inputs):
    # any backward logic implemented with numpy io.
    pass
```


## Tracer

Current: Python Variable -> C++ VarBase -> C++ Variable -> C++ Tensor

Longer term:
```python

# Parent class.
class PyVarBase(object):
  pass

# Current python variable.
class Variable(PyVarBase):
  pass

class IVariable(PyVarBase):
  def __init__(self):
    self._ivar = core.VarBase()

  # Move var to a device.
  def to(self, device): pass
  # Get var value.
  def value(self): pass
  # Trigger backward.
  def backward(self): pass
  # Get var's gradient value.
  def gradient_value(self): pass
  # operators to override.
```



```cpp
class Tracer {
 public:
  explicit Tracer(framework::BlockDesc* root_block) : root_block_(root_block) {}

  virtual ~Tracer() {}

  // Record one forward op together with its input/output variables so that
  // the corresponding backward ops can be generated later.
  void Trace(OpBase* op,
             const std::map<std::string, std::vector<VarBase*>>& inputs,
             const std::map<std::string, std::vector<VarBase*>>& outputs,
             framework::BlockDesc* block, const bool stop_gradient = false);

  // Trace an op whose forward/backward logic is implemented in Python
  // (a PyLayer); returns the output variables it produces.
  std::vector<VarBase*> PyTrace(OpBase* op, const std::vector<VarBase*>& inputs,
                                bool stop_gradient = false);
};
```

* Trace forward operations.
* Perform quick shape/type inference, push the kernel to the execution engine, and return to the user.
* Perform autograd to generate gradients.
* Clear the trace.
* Apply gradients with optimizers (the whole cycle is sketched below).

## Autodiff

There is already a lot of research on autodiff:

* https://autodiff-workshop.github.io/
* https://en.wikipedia.org/wiki/Automatic_differentiation

Basically, trace the forward execution, and perform autodiff
when needed.

* Can be triggered by `backward()` (a single reverse-mode step is worked out below).
* Can select a block of code to trace and autodiff.
* Use `require_grad` to drop forward subgraphs that don't need autodiff.
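
As a concrete instance of a single reverse-mode step, here is the numpy backward used by the `MyPyLayer` tanh example later in this document: given the gradient flowing into the output, the chain rule yields the gradient of the input.

```python
import numpy as np

x = np.array([[0.0, 0.5], [1.0, 2.0]], dtype=np.float32)
y = np.tanh(x)                    # forward: y = tanh(x)
dout = np.ones_like(y)            # d(loss)/dy, e.g. for loss = sum(y)
dx = dout * (1.0 - np.square(y))  # chain rule: d(tanh(x))/dx = 1 - tanh(x)^2
```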

## Execution Engine

Lazy execution of pushed C++ operations.
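
A toy sketch of the idea, assuming a simple queue-based design; the `Engine`/`LazyVar` names are placeholders, not the actual Paddle classes. Pushed operations are queued and only executed once a result is actually requested.

```python
class Engine(object):
    """Queues pushed operations and runs them only when a value is needed."""
    def __init__(self):
        self.pending = []

    def push(self, fn, out):
        self.pending.append((fn, out))   # enqueue, do not execute yet
        return out

    def flush(self):
        for fn, out in self.pending:     # run everything pushed so far, in order
            out.value = fn()
        self.pending = []

class LazyVar(object):
    def __init__(self, engine):
        self.engine = engine
        self.value = None

    def fetch(self):
        self.engine.flush()              # force execution before reading
        return self.value

engine = Engine()
a = engine.push(lambda: 1 + 2, LazyVar(engine))
b = engine.push(lambda: a.value * 10, LazyVar(engine))
print(b.fetch())                         # kernels run only here; prints 30
```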

## Device Placement

* An operator executes on its inputs' device.
* All inputs should live on the same device.
* Use `Var.to()` to explicitly move a var to another device (see the sketch below).
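
A minimal sketch of these placement rules in plain Python; the `Var` class and the device strings are illustrative stand-ins for the real variable and device types.

```python
class Var(object):
    def __init__(self, data, device="cpu"):
        self.data, self.device = data, device

    def to(self, device):
        # explicitly move the variable to another device
        return Var(self.data, device)

def elementwise_add(x, y):
    # an operator executes on its inputs' device; mixed devices are an error
    assert x.device == y.device, "all inputs should live on the same device"
    return Var(x.data + y.data, x.device)

a = Var(1.0).to("gpu:0")
b = Var(2.0).to("gpu:0")
c = elementwise_add(a, b)   # runs on gpu:0, where both inputs live
```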

## Save/Load Models

TODO

## I/O

TODO

## Refactor

* All function layers with parameters are converted to class-based `Layer`s.
* Existing models are converted to imperative mode.
* All op tests run once in static-graph mode and once in imperative mode.

# Examples

```python
import numpy as np
import paddle.fluid as fluid
# Note: the import path for FC below is assumed from the fluid.imperative
# package layout of this design and may differ across versions.
from paddle.fluid.imperative.nn import FC


class MyLayer(fluid.imperative.Layer):
    def __init__(self):
        super(MyLayer, self).__init__()

    def forward(self, inputs):
        x = fluid.layers.relu(inputs)
        x = fluid.layers.elementwise_mul(x, x)
        x = fluid.layers.reduce_sum(x)
        return [x]


class MyPyLayer(fluid.imperative.PyLayer):
    def __init__(self):
        super(MyPyLayer, self).__init__()

    @staticmethod
    def forward(inputs):
        return np.tanh(inputs[0])

    @staticmethod
    def backward(inputs):
        # unpack the forward input, the forward output and the output's gradient
        inp, out, dout = inputs
        return np.array(dout) * (1 - np.square(np.array(out)))


np_inp = np.ones([2, 2], np.float32)
with fluid.imperative.guard():
    var_inp = fluid.imperative.base.to_variable(np_inp)
    my_py_layer = MyPyLayer()
    outs = my_py_layer(var_inp)
    dy_out = np.sum(outs[0]._numpy())
    outs[0]._backward()
    dy_grad = var_inp._gradient()


class MLP(fluid.imperative.Layer):
    def __init__(self):
        super(MLP, self).__init__()
        self._fc1 = FC(3,
                       fluid.ParamAttr(
                           initializer=fluid.initializer.Constant(value=0.1)))
        self._fc2 = FC(4,
                       fluid.ParamAttr(
                           initializer=fluid.initializer.Constant(value=0.1)))

    def forward(self, inputs):
        x = self._fc1(inputs)
        x = self._fc2(x)
        x = fluid.layers.reduce_sum(x)
        return x


np_inp = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
with fluid.imperative.guard():
    var_inp = fluid.imperative.base.to_variable(np_inp)
    mlp = MLP()
    out = mlp(var_inp)
    dy_out = out._numpy()
    out._backward()
```

# Plan

* 2.1: 3 full-time engineers (currently 2 engineers at 20% time). Can run a few simple models.
* 4.1: 4 full-time engineers. Can run 6 models; performance at 70% of PyTorch. Release alpha.
* 6.1: 5 full-time engineers. Performance close to PyTorch; can run on multiple devices. Release beta.
* 8.1: 5 full-time engineers. Works in general. Update existing models. Can compile to a static graph and support more optimizations.
* 12.1: Done.

# Discussion

TODO.