Commit d493b198 authored by Yi Wang, committed by GitHub

Merge pull request #223 from helinwang/fixes

fix book examples
@@ -170,6 +170,8 @@ We must import and initialize PaddlePaddle (enable/disable GPU, set the number o

```python
 import sys
 import paddle.v2 as paddle
+from vgg import vgg_bn_drop
+from resnet import resnet_cifar10
 # PaddlePaddle init
 paddle.init(use_gpu=False, trainer_count=1)
```
@@ -243,7 +245,9 @@ First, we use a VGG network. Since the image size and amount of CIFAR10 are rela

The above VGG network extracts high-level features and maps them to a vector whose size equals the number of categories. The Softmax function (classifier) is then used to calculate the probability of the image belonging to each category.

```python
-out = fc_layer(input=net, size=class_num, act=SoftmaxActivation())
+out = paddle.layer.fc(input=net,
+                      size=classdim,
+                      act=paddle.activation.Softmax())
```
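For context, here is a minimal, self-contained sketch of how the updated `paddle.layer.fc` classifier head is typically wired to a loss in the v2 API. The data-layer names, the stand-in `net` layer, and the `classdim` value are illustrative assumptions, not part of this diff:

```python
import paddle.v2 as paddle

paddle.init(use_gpu=False, trainer_count=1)

classdim = 10  # CIFAR10 has 10 categories
image = paddle.layer.data(name="image",
                          type=paddle.data_type.dense_vector(3 * 32 * 32))
label = paddle.layer.data(name="label",
                          type=paddle.data_type.integer_value(classdim))

# Stand-in feature extractor; in the tutorial `net` is the output of
# vgg_bn_drop(image) or resnet_cifar10(image).
net = paddle.layer.fc(input=image, size=128, act=paddle.activation.Relu())

# The classifier head from the diff: map features to per-class probabilities.
out = paddle.layer.fc(input=net,
                      size=classdim,
                      act=paddle.activation.Softmax())
cost = paddle.layer.classification_cost(input=out, label=label)
```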
4. Define Loss Function and Outputs
@@ -261,7 +265,7 @@ First, we use a VGG network. Since the image size and amount of CIFAR10 are rela

The first, third, and fourth steps of a ResNet are the same as for VGG; the second step is the main ResNet module.

```python
-net = resnet_cifar10(data, depth=32)
+net = resnet_cifar10(image, depth=56)
```
Here are some basic functions used in `resnet_cifar10`:
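Those helper functions are collapsed in this view. As a rough illustration of the kind of building block `resnet_cifar10` relies on, here is a minimal conv + batch-norm helper in the paddle.v2 layer API; the function name and defaults are assumptions, not taken from this diff:

```python
import paddle.v2 as paddle

def conv_bn_layer(input, ch_out, filter_size, stride, padding,
                  active_type=paddle.activation.Relu(), ch_in=None):
    # Convolution without its own non-linearity or bias; the batch-norm
    # layer supplies the activation afterwards.
    tmp = paddle.layer.img_conv(
        input=input,
        filter_size=filter_size,
        num_channels=ch_in,
        num_filters=ch_out,
        stride=stride,
        padding=padding,
        act=paddle.activation.Linear(),
        bias_attr=False)
    return paddle.layer.batch_norm(input=tmp, act=active_type)
```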
......
@@ -212,6 +212,8 @@ We must import and initialize PaddlePaddle (enable/disable GPU, set the number o

```python
 import sys
 import paddle.v2 as paddle
+from vgg import vgg_bn_drop
+from resnet import resnet_cifar10
 # PaddlePaddle init
 paddle.init(use_gpu=False, trainer_count=1)
```
@@ -285,7 +287,9 @@ First, we use a VGG network. Since the image size and amount of CIFAR10 are rela

The above VGG network extracts high-level features and maps them to a vector whose size equals the number of categories. The Softmax function (classifier) is then used to calculate the probability of the image belonging to each category.

```python
-out = fc_layer(input=net, size=class_num, act=SoftmaxActivation())
+out = paddle.layer.fc(input=net,
+                      size=classdim,
+                      act=paddle.activation.Softmax())
```
4. Define Loss Function and Outputs
@@ -303,7 +307,7 @@ First, we use a VGG network. Since the image size and amount of CIFAR10 are rela

The first, third, and fourth steps of a ResNet are the same as for VGG; the second step is the main ResNet module.

```python
-net = resnet_cifar10(data, depth=32)
+net = resnet_cifar10(image, depth=56)
```
Here are some basic functions used in `resnet_cifar10`:
......
@@ -83,11 +83,6 @@ We use the [MovieLens ml-1m](http://files.grouplens.org/datasets/movielens/ml-1m

The `paddle.v2.dataset` package encapsulates multiple public datasets, including `cifar`, `imdb`, `mnist`, `movielens`, and `wmt14`. There is no need to manually download and preprocess the `MovieLens` dataset.

```python
-# Run this block to show dataset's documentation
-help(paddle.v2.dataset.movielens)
```
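The removed block only printed the module's documentation. For a more concrete look, here is a minimal sketch of peeking at one raw training record, assuming the standard `paddle.dataset.movielens` reader interface (the reader downloads the dataset on first use):

```python
import paddle.v2 as paddle

# movielens.train() returns a reader: a callable that yields records.
reader = paddle.dataset.movielens.train()

# Each record is a tuple of user/movie feature slots followed by the score.
first_record = next(reader())
print(first_record)
```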
The raw `MovieLens` data contains movie ratings and relevant features of both movies and users.
For instance, one movie's features could be:
@@ -182,7 +177,6 @@ from IPython import display
import cPickle
import paddle.v2 as paddle
paddle.init(use_gpu=False)
```
@@ -296,9 +290,9 @@ trainer = paddle.trainer.SGD(cost=cost, parameters=parameters,

`paddle.dataset.movielens.train` will yield records during each pass; after shuffling, batched input is generated for training.

```python
-reader=paddle.reader.batch(
+reader=paddle.batch(
     paddle.reader.shuffle(
-        paddle.dataset.movielens.trai(), buf_size=8192),
+        paddle.dataset.movielens.train(), buf_size=8192),
     batch_size=256)
```
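As a usage illustration, a minimal sketch of consuming such a composed reader; the peek loop and the variable name `train_reader` are illustrative assumptions, not part of the diff:

```python
import paddle.v2 as paddle

# Compose the reader exactly as in the fixed snippet above.
train_reader = paddle.batch(
    paddle.reader.shuffle(
        paddle.dataset.movielens.train(), buf_size=8192),
    batch_size=256)

# Calling the composed reader yields mini-batches; each batch is a
# list of records (tuples of feature slots plus the score).
for batch_id, batch in enumerate(train_reader()):
    print("batch %d holds %d records" % (batch_id, len(batch)))
    if batch_id >= 2:  # only peek at the first few batches
        break
```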
......
@@ -404,7 +404,7 @@ feature = user.value() + movie.value()

 infer_dict = copy.copy(feeding)
 del infer_dict['score']
-prediction = paddle.infer(output=inference, parameters=parameters, input=[feature], feeding=infer_dict)
+prediction = paddle.infer(inference, parameters=parameters, input=[feature], feeding=infer_dict)
 score = (prediction[0][0] + 5.0) / 2
 print "[Predict] User %d Rating Movie %d With Score %.2f"%(user_id, movie_id, score)
```
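For context, a minimal sketch of how the `feature` fed into `paddle.infer` is assembled from the MovieLens metadata helpers; the specific `user_id` and `movie_id` values are illustrative assumptions:

```python
import paddle.v2 as paddle

user_id = 234   # illustrative ids; the notebook picks them interactively
movie_id = 345

user = paddle.dataset.movielens.user_info()[user_id]
movie = paddle.dataset.movielens.movie_info()[movie_id]

# Concatenate the user-side and movie-side feature slots, as in the
# `feature = user.value() + movie.value()` line shown above.
feature = user.value() + movie.value()
print(feature)
```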
......
@@ -125,11 +125,6 @@ We use the [MovieLens ml-1m](http://files.grouplens.org/datasets/movielens/ml-1m

The `paddle.v2.dataset` package encapsulates multiple public datasets, including `cifar`, `imdb`, `mnist`, `movielens`, and `wmt14`. There is no need to manually download and preprocess the `MovieLens` dataset.

```python
-# Run this block to show dataset's documentation
-help(paddle.v2.dataset.movielens)
```
The raw `MovieLens` data contains movie ratings and relevant features of both movies and users.
For instance, one movie's features could be:
@@ -224,7 +219,6 @@ from IPython import display
import cPickle
import paddle.v2 as paddle
paddle.init(use_gpu=False)
```
@@ -338,9 +332,9 @@ trainer = paddle.trainer.SGD(cost=cost, parameters=parameters,

`paddle.dataset.movielens.train` will yield records during each pass; after shuffling, batched input is generated for training.

```python
-reader=paddle.reader.batch(
+reader=paddle.batch(
     paddle.reader.shuffle(
-        paddle.dataset.movielens.trai(), buf_size=8192),
+        paddle.dataset.movielens.train(), buf_size=8192),
     batch_size=256)
```
......
@@ -446,7 +446,7 @@ feature = user.value() + movie.value()

 infer_dict = copy.copy(feeding)
 del infer_dict['score']
-prediction = paddle.infer(output=inference, parameters=parameters, input=[feature], feeding=infer_dict)
+prediction = paddle.infer(inference, parameters=parameters, input=[feature], feeding=infer_dict)
 score = (prediction[0][0] + 5.0) / 2
 print "[Predict] User %d Rating Movie %d With Score %.2f"%(user_id, movie_id, score)
```
......
@@ -344,7 +344,7 @@ Finally, we can invoke `trainer.train` to start training:

 trainer.train(
     reader=train_reader,
     event_handler=event_handler,
-    feeding=feedig,
+    feeding=feeding,
     num_passes=10)
```
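For context, a minimal sketch of the pieces this call expects, assuming the standard paddle.v2 trainer API; the data-layer names in `feeding` and the logging interval are placeholders, not taken from this diff:

```python
import paddle.v2 as paddle

# Map each data-layer name to its position in a training record;
# use the names of the actual model's data layers here.
feeding = {'input': 0, 'label': 1}

# A typical event handler: log the cost every 100 mini-batches.
def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            print("pass %d, batch %d, cost %f" % (
                event.pass_id, event.batch_id, event.cost))

# With `trainer` (a paddle.trainer.SGD instance) and `train_reader`
# (a paddle.batch(...) reader) built earlier, training is started with:
# trainer.train(reader=train_reader,
#               event_handler=event_handler,
#               feeding=feeding,
#               num_passes=10)
```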
......
@@ -386,7 +386,7 @@ Finally, we can invoke `trainer.train` to start training:

 trainer.train(
     reader=train_reader,
     event_handler=event_handler,
-    feeding=feedig,
+    feeding=feeding,
     num_passes=10)
```
......