diff --git a/demo/keras/TUTORIAL_EN.md b/demo/keras/TUTORIAL_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..79c8bf60850c3c4efadb7bae2b99d5bc2396f6e0
--- /dev/null
+++ b/demo/keras/TUTORIAL_EN.md
@@ -0,0 +1,90 @@
+# How to use VisualDL in Keras
+
+Here we will show you how to use VisualDL with Keras so that you can visualize the training process of your Keras models.
+As an example, we will train a Keras convolutional neural network on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset.
+
+The training program comes from the official Keras GitHub [example](https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py).
+We just need to create the VisualDL data collection loggers in the code:
+
+```python
+from visualdl import LogWriter
+
+# create VisualDL logger
+logdir = "/workspace"
+logger = LogWriter(logdir, sync_cycle=100)
+
+# mark the components with 'train' label.
+with logger.mode("train"):
+    # create a scalar component called 'scalars/scalar_keras_train_loss'
+ scalar_keras_train_loss = logger.scalar(
+ "scalars/scalar_keras_train_loss")
+ image_input = logger.image("images/input", 1)
+ image0 = logger.image("images/image0", 1)
+ image1 = logger.image("images/image1", 1)
+ histogram0 = logger.histogram("histogram/histogram0", num_buckets=50)
+ histogram1 = logger.histogram("histogram/histogram1", num_buckets=50)
+
+```
+
+Then we can call our data loggers from inside a callback, using the hooks that Keras provides.
+
+```python
+train_step = 0
+
+class LossHistory(keras.callbacks.Callback):
+ def on_train_begin(self, logs={}):
+ self.losses = []
+
+ def on_batch_end(self, batch, logs={}):
+ global train_step
+
+ # Scalar
+ scalar_keras_train_loss.add_record(train_step, logs.get('loss'))
+
+ # get weights for 2 layers
+ W0 = model.layers[0].get_weights()[0] # 3 x 3 x 1 x 32
+ W1 = model.layers[1].get_weights()[0] # 3 x 3 x 32 x 64
+
+ weight_array0 = W0.flatten()
+ weight_array1 = W1.flatten()
+
+ # histogram
+ histogram0.add_record(train_step, weight_array0)
+ histogram1.add_record(train_step, weight_array1)
+
+ # image
+ image_input.start_sampling()
+ image_input.add_sample([28, 28], x_train[0].flatten())
+ image_input.finish_sampling()
+
+ image0.start_sampling()
+ image0.add_sample([9, 32], weight_array0)
+ image0.finish_sampling()
+
+ image1.start_sampling()
+ image1.add_sample([288, 64], weight_array1)
+ image1.finish_sampling()
+
+ train_step += 1
+ self.losses.append(logs.get('loss'))
+```
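+
+To make Keras actually call these hooks, an instance of the callback has to be passed to `model.fit`. A minimal sketch, assuming the variables (`model`, `x_train`, `y_train`, `batch_size`, `epochs`, ...) defined in the Keras mnist_cnn example this tutorial builds on:
+
+```python
+history = LossHistory()
+model.fit(x_train, y_train,
+          batch_size=batch_size,
+          epochs=epochs,
+          verbose=1,
+          validation_data=(x_test, y_test),
+          # register the VisualDL callback defined above
+          callbacks=[history])
+```
+
+Once training runs, the records can be viewed by starting the board with `visualDL --logdir=/workspace --port=8888`, as in the MXNet tutorial.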
+
+After the training, the visual results of each component are as follows:
+
+The scalar diagram of the error is as follows:
+
+The input picture and the first and second layer convolution weights after the training are as follows:
+
+The histograms of the training parameters are as follows:
+
+The full demonstration code can be downloaded from [here](./keras_mnist_demo.py).
\ No newline at end of file
diff --git a/demo/mxnet/TUTORIAL_CN.md b/demo/mxnet/TUTORIAL_CN.md
index ae1e8bd895a2e83ae5f3346a152591cba614bab0..b108b678487c0d43a4aba6aaeacdcbe7aa5efa29 100644
--- a/demo/mxnet/TUTORIAL_CN.md
+++ b/demo/mxnet/TUTORIAL_CN.md
@@ -9,7 +9,7 @@
## Install MXNet
-Please install MXNet according to MXNet's [official website](https://mxnet.incubator.apache.org/get_started/install.html) and verify that the installation is successful.
+Please install MXNet according to MXNet's [official website](https://mxnet.incubator.apache.org/install/index.html) and verify that the installation is successful.
>>> import mxnet as mx
@@ -29,7 +29,7 @@ pip install --upgrade dist/visualdl-*.whl
## Start writing the program for training MNIST
-We have provided you with a demo program, [mxnet_demo.py](./mxnet_demo.py). It shows how to download the MNIST dataset and write an MXNet program for CNN training. The MXNet part draws on MXNet's [official beginner tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html)
+We have provided you with a demo program, [mxnet_demo.py](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/mxnet/mxnet_demo.py). It shows how to download the MNIST dataset and write an MXNet program for CNN training. The MXNet part draws on MXNet's [official beginner tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html)
To embed VisualDL so that it can monitor MXNet training, we need to declare a LogWriter instance:
```python
@@ -63,7 +63,7 @@ lenet_model.fit(train_iter,
## Display the model graph with VisualDL
-One advantage of VisualDL is that it can visualize deep learning models, helping users understand a model's composition more intuitively: which operations it contains, which inputs it takes, and so on. VisualDL's model graph supports the native PaddlePaddle format as well as the widely applicable ONNX format. Here users can train a model with MXNet, convert it into ONNX format with the [ONNX-MXNet] (https://github.com/onnx/onnx-mxnet) tool, and then visualize it.
+One advantage of VisualDL is that it can visualize deep learning models, helping users understand a model's composition more intuitively: which operations it contains, which inputs it takes, and so on. VisualDL's model graph supports the native PaddlePaddle format as well as the widely applicable ONNX format. Here users can train a model with MXNet, convert it into ONNX format with the [ONNX-MXNet](https://github.com/onnx/onnx-mxnet) tool, and then visualize it.
Here we use a ready-made model that has already been converted from MXNet to ONNX, the [Super_Resolution model](https://s3.amazonaws.com/onnx-mxnet/examples/super_resolution.onnx)
Using VisualDL is simple: after installation, just pass the model file (protobuf format) to VisualDL with the -m parameter.
@@ -74,6 +74,6 @@ visualDL --logdir=/workspace -m /workspace/super_resolution_mnist.onnx --port=88
The model graph looks like this:
-
+
-The full rendered graph can be downloaded [here](./super_resolution_graph.png).
+The full rendered graph can be downloaded [here](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/mxnet/super_resolution_graph.png).
diff --git a/demo/mxnet/TUTORIAL_EN.md b/demo/mxnet/TUTORIAL_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..64e38369946011b3ae55af374487943c1e2738c8
--- /dev/null
+++ b/demo/mxnet/TUTORIAL_EN.md
@@ -0,0 +1,84 @@
+# How to use VisualDL in MXNet
+
+Here we will show you how to use VisualDL in MXNet so that you can visualize the training process of your MXNet models.
+As an example, we will train an MXNet convolutional neural network on the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset.
+
+## Install MXNet
+Please install MXNet according to MXNet's [official website](https://mxnet.incubator.apache.org/install/index.html)
+and verify that the installation is successful.
+
+ >>> import mxnet as mx
+ >>> a = mx.nd.ones((2, 3))
+ >>> b = a * 2 + 1
+ >>> b.asnumpy()
+ array([[ 3., 3., 3.],
+ [ 3., 3., 3.]], dtype=float32)
+
+## Install VisualDL
+The installation of VisualDL is very simple. Please install it according to the [official website](https://github.com/PaddlePaddle/VisualDL) of VisualDL.
+Only two steps are required.
+
+```bash
+python setup.py bdist_wheel
+pip install --upgrade dist/visualdl-*.whl
+```
+
+## Start writing the program for training MNIST
+
+We have provided you with a demonstration program [mxnet_demo.py](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/mxnet/mxnet_demo.py).
+It shows how to download the MNIST dataset and write an MXNet program for CNN training.
+The training program is based on the [MXNet tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
+
+We need to create a VisualDL LogWriter instance to record MXNet training:
+
+```python
+from visualdl import LogWriter
+
+logdir = "/workspace"
+logger = LogWriter(logdir, sync_cycle=30)
+```
+
+The logger instance contains three modules: Scalar, Image, and Histogram. Here we use the Scalar module:
+
+```python
+scalar0 = logger.scalar("scalars/scalar0")
+```
+
+The tag can contain '/' in order to create different namespaces for complex models.
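+
+For example, a hypothetical layout that combines modes with a shared `scalars/` prefix (`logger.mode` is the same call used in the Keras and PyTorch tutorials in this repo):
+
+```python
+# '/' in the tag creates a nested namespace in the UI;
+# the mode adds another grouping level on top of it
+with logger.mode("train"):
+    train_acc = logger.scalar("scalars/accuracy")
+with logger.mode("test"):
+    test_acc = logger.scalar("scalars/accuracy")
+```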
+
+MXNet provides many hooks in its fit function (see the [API](https://mxnet.incubator.apache.org/api/python/index.html)).
+We pass our callback function `add_scalar` to the corresponding parameter:
+
+```python
+lenet_model.fit(train_iter,
+ eval_data=val_iter,
+ optimizer='sgd',
+ optimizer_params={'learning_rate':0.1},
+ eval_metric='acc',
+ # Here we embed our custom callback function
+ batch_end_callback=[add_scalar()],
+ num_epoch=2)
+```
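+
+`add_scalar` itself lives in [mxnet_demo.py](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/mxnet/mxnet_demo.py). A minimal sketch of such a batch-end callback, assuming the `scalar0` logger from above and MXNet's `BatchEndParam` argument, could look like this:
+
+```python
+train_step = 0
+
+def add_scalar():
+    def _callback(param):
+        # param is MXNet's BatchEndParam namedtuple; its eval_metric
+        # holds the running accuracy for the current batch
+        global train_step
+        for name, value in param.eval_metric.get_name_value():
+            scalar0.add_record(train_step, value)
+        train_step += 1
+    return _callback
+```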
+
+That's all. During MXNet training, our callback function is invoked at the end of each training batch to record the accuracy.
+The accuracy keeps rising until it exceeds 95%.
+The following shows the accuracy over the two epochs:
+
+
+## Display the model graph with VisualDL
+
+VisualDL helps users understand the composition of a model more intuitively by visualizing deep learning models.
+VisualDL can visualize models in the widely supported ONNX format.
+Users may train a model with MXNet, convert it into ONNX format with the [ONNX-MXNet](https://github.com/onnx/onnx-mxnet) tool, and then visualize it.
+
+Here we use the existing model that has been transformed from MXNet to ONNX, [Super_Resolution model](https://s3.amazonaws.com/onnx-mxnet/examples/super_resolution.onnx).
+
+To display the model graph via VisualDL, pass the model file path to VisualDL with the -m parameter:
+
+```bash
+visualDL --logdir=/workspace -m /workspace/super_resolution_mnist.onnx --port=8888
+```
+
+The model graph is as follows:
+
+
+You can download the full size image [here](https://github.com/PaddlePaddle/VisualDL/blob/develop/demo/mxnet/super_resolution_graph.png).
diff --git a/demo/pytorch/TUTORIAL_EN.md b/demo/pytorch/TUTORIAL_EN.md
new file mode 100644
index 0000000000000000000000000000000000000000..7801b604f712a3e31c97e52ba9458df362ddaf78
--- /dev/null
+++ b/demo/pytorch/TUTORIAL_EN.md
@@ -0,0 +1,210 @@
+# How to use VisualDL in PyTorch
+
+Here we will show you how to use VisualDL in PyTorch so that you can visualize the training process of your PyTorch models.
+As an example, we will train a PyTorch convolutional neural network on the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset.
+
+The training program comes from the [PyTorch Tutorial](http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html).
+
+```python
+import torch
+import torchvision
+import torchvision.transforms as transforms
+from torch.autograd import Variable
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.optim as optim
+
+import matplotlib
+matplotlib.use('Agg')
+
+from visualdl import LogWriter
+
+
+transform = transforms.Compose(
+ [transforms.ToTensor(),
+ transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
+
+trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
+ download=True, transform=transform)
+trainloader = torch.utils.data.DataLoader(trainset, batch_size=500,
+ shuffle=True, num_workers=2)
+
+testset = torchvision.datasets.CIFAR10(root='./data', train=False,
+ download=True, transform=transform)
+testloader = torch.utils.data.DataLoader(testset, batch_size=500,
+ shuffle=False, num_workers=2)
+
+classes = ('plane', 'car', 'bird', 'cat',
+ 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
+
+
+import matplotlib.pyplot as plt
+import numpy as np
+
+
+# functions to show an image
+def imshow(img):
+ img = img / 2 + 0.5 # unnormalize
+ npimg = img.numpy()
+ fig, ax = plt.subplots()
+ plt.imshow(np.transpose(npimg, (1, 2, 0)))
+ # we can either show the image or save it locally
+ # plt.show()
+ fig.savefig('out' + str(np.random.randint(0, 10000)) + '.pdf')
+```
+
+We can preview the CIFAR-10 images that will be analyzed:
+
+We just need to create the VisualDL data collection loggers in the code:
+
+```python
+logdir = "/workspace"
+logger = LogWriter(logdir, sync_cycle=100)
+
+# mark the components with 'train' label.
+with logger.mode("train"):
+    # create a scalar component called 'scalars/scalar_pytorch_train_loss'
+ scalar_pytorch_train_loss = logger.scalar("scalars/scalar_pytorch_train_loss")
+ image1 = logger.image("images/image1", 1)
+ image2 = logger.image("images/image2", 1)
+ histogram0 = logger.histogram("histogram/histogram0", num_buckets=100)
+```
+
+There are 50000 training images and 10000 test images in CIFAR-10. We set the training batch size to 500
+in the DataLoader above, so each training batch is a tensor of dimensions:
+500 x 3 x 32 x 32
+
+Then we start to build the CNN model:
+
+```python
+# get some random training images
+dataiter = iter(trainloader)
+images, labels = next(dataiter)
+
+# show images
+imshow(torchvision.utils.make_grid(images))
+# print the labels of the first four images
+print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
+
+# Define a Convolution Neural Network
+class Net(nn.Module):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.conv1 = nn.Conv2d(3, 6, 5)
+ self.pool = nn.MaxPool2d(2, 2)
+ self.conv2 = nn.Conv2d(6, 16, 5)
+ self.fc1 = nn.Linear(16 * 5 * 5, 120)
+ self.fc2 = nn.Linear(120, 84)
+ self.fc3 = nn.Linear(84, 10)
+
+ def forward(self, x):
+ x = self.pool(F.relu(self.conv1(x)))
+ x = self.pool(F.relu(self.conv2(x)))
+ x = x.view(-1, 16 * 5 * 5)
+ x = F.relu(self.fc1(x))
+ x = F.relu(self.fc2(x))
+ x = self.fc3(x)
+ return x
+
+
+net = Net()
+```
+
+Then we start to train and use VisualDL to collect data at the same time:
+
+```python
+# Train the network
+for epoch in range(5): # loop over the dataset multiple times
+ running_loss = 0.0
+ for i, data in enumerate(trainloader, 0):
+ # get the inputs
+ inputs, labels = data
+
+ # wrap them in Variable
+ inputs, labels = Variable(inputs), Variable(labels)
+
+ # zero the parameter gradients
+ optimizer.zero_grad()
+
+ # forward + backward + optimize
+ outputs = net(inputs)
+ loss = criterion(outputs, labels)
+
+ loss.backward()
+ optimizer.step()
+
+ # use VisualDL to retrieve metrics
+ # scalar
+        scalar_pytorch_train_loss.add_record(train_step, loss.data[0])
+
+ # histogram
+ weight_list = net.conv1.weight.view(6*3*5*5, -1)
+ histogram0.add_record(train_step, weight_list)
+
+ # image
+ image1.start_sampling()
+ image1.add_sample([96, 25], net.conv2.weight.view(16*6*5*5, -1))
+ image1.finish_sampling()
+
+ image2.start_sampling()
+ image2.add_sample([18, 25], net.conv1.weight.view(6*3*5*5, -1))
+ image2.finish_sampling()
+
+
+ train_step += 1
+
+ # print statistics
+ running_loss += loss.data[0]
+ if i % 2000 == 1999: # print every 2000 mini-batches
+ print('[%d, %5d] loss: %.3f' %
+ (epoch + 1, i + 1, running_loss / 2000))
+ running_loss = 0.0
+
+print('Finished Training')
+```
+
+PyTorch supports the ONNX standard and can export its models to ONNX.
+PyTorch runs a single round of inference to trace the graph, so we use a dummy input to run the model and produce the ONNX file:
+
+```python
+import torch.onnx
+dummy_input = Variable(torch.randn(4, 3, 32, 32))
+torch.onnx.export(net, dummy_input, "pytorch_cifar10.onnx")
+
+print('Done')
+```
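+
+To display the exported graph, the ONNX file can then be handed to VisualDL with the -m parameter, the same way as in the MXNet tutorial (a sketch; the log directory and port here are placeholders):
+
+```bash
+visualDL --logdir=/workspace -m /workspace/pytorch_cifar10.onnx --port=8888
+```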
+
+After the training, the visual results of each component are as follows:
+
+The scalar diagram of the error is as follows:
+
+The pictures of the first and second layer convolution weights after the training are as follows:
+
+The histograms of the training parameters are as follows:
+
+The model graph is as follows:
+
+You can download the full size image [here](https://github.com/daming-lu/large_files/blob/master/pytorch_demo_figs/graph.png?Raw=true).