diff --git a/image_classification/README.en.md b/image_classification/README.en.md
index 68e4977c8b23863d79956bd0523cd5016fa3eef3..fd7dff484a95ef06f06b7e7ced8598d32c687f84 100644
--- a/image_classification/README.en.md
+++ b/image_classification/README.en.md
@@ -170,6 +170,8 @@ We must import and initialize PaddlePaddle (enable/disable GPU, set the number o
 ```python
 import sys
 import paddle.v2 as paddle
+from vgg import vgg_bn_drop
+from resnet import resnet_cifar10

 # PaddlePaddle init
 paddle.init(use_gpu=False, trainer_count=1)
@@ -243,7 +245,9 @@ First, we use a VGG network. Since the image size and amount of CIFAR10 are rela
 The above VGG network extracts high-level features and maps them to a vector of the same size as the categories. Softmax function or classifier is then used for calculating the probability of the image belonging to each category.

 ```python
-out = fc_layer(input=net, size=class_num, act=SoftmaxActivation())
+out = paddle.layer.fc(input=net,
+                      size=classdim,
+                      act=paddle.activation.Softmax())
 ```

 4. Define Loss Function and Outputs
@@ -261,7 +265,7 @@ First, we use a VGG network. Since the image size and amount of CIFAR10 are rela
 The first, third and fourth steps of a ResNet are the same as a VGG. The second one is the main module.

 ```python
-net = resnet_cifar10(data, depth=32)
+net = resnet_cifar10(image, depth=56)
 ```

 Here are some basic functions used in `resnet_cifar10`:
diff --git a/image_classification/index.en.html b/image_classification/index.en.html
index 759f1149ae9b62d26506d4bd30e22e2c2e578f4f..7435c524f0cae50b15957cb15b8ff07e22c32188 100644
--- a/image_classification/index.en.html
+++ b/image_classification/index.en.html
@@ -212,6 +212,8 @@ We must import and initialize PaddlePaddle (enable/disable GPU, set the number o
 ```python
 import sys
 import paddle.v2 as paddle
+from vgg import vgg_bn_drop
+from resnet import resnet_cifar10

 # PaddlePaddle init
 paddle.init(use_gpu=False, trainer_count=1)
@@ -285,7 +287,9 @@ First, we use a VGG network. Since the image size and amount of CIFAR10 are rela
 The above VGG network extracts high-level features and maps them to a vector of the same size as the categories. Softmax function or classifier is then used for calculating the probability of the image belonging to each category.

 ```python
-out = fc_layer(input=net, size=class_num, act=SoftmaxActivation())
+out = paddle.layer.fc(input=net,
+                      size=classdim,
+                      act=paddle.activation.Softmax())
 ```

 4. Define Loss Function and Outputs
@@ -303,7 +307,7 @@ First, we use a VGG network. Since the image size and amount of CIFAR10 are rela
 The first, third and fourth steps of a ResNet are the same as a VGG. The second one is the main module.

 ```python
-net = resnet_cifar10(data, depth=32)
+net = resnet_cifar10(image, depth=56)
 ```

 Here are some basic functions used in `resnet_cifar10`:
diff --git a/recommender_system/README.en.md b/recommender_system/README.en.md
index d9b925193c7935087ca649abf190595754dd7d47..9a1b5571cee03b926ac02dcc1d0f3920281db7f7 100644
--- a/recommender_system/README.en.md
+++ b/recommender_system/README.en.md
@@ -83,11 +83,6 @@ We use the [MovieLens ml-1m](http://files.grouplens.org/datasets/movielens/ml-1m
 `paddle.v2.datasets` package encapsulates multiple public datasets, including `cifar`, `imdb`, `mnist`, `moivelens` and `wmt14`, etc. There's no need for us to manually download and preprocess `MovieLens` dataset.

-```python
-# Run this block to show dataset's documentation
-help(paddle.v2.dataset.movielens)
-```
-
 The raw `MoiveLens` contains movie ratings, relevant features from both movies and users.

 For instance, one movie's feature could be:
@@ -182,7 +177,6 @@ from IPython import display
 import cPickle

 import paddle.v2 as paddle
-
 paddle.init(use_gpu=False)
 ```
@@ -296,9 +290,9 @@ trainer = paddle.trainer.SGD(cost=cost, parameters=parameters,
 `paddle.dataset.movielens.train` will yield records during each pass, after shuffling, a batch input is generated for training.

 ```python
-reader=paddle.reader.batch(
+reader=paddle.batch(
     paddle.reader.shuffle(
-        paddle.dataset.movielens.trai(), buf_size=8192),
+        paddle.dataset.movielens.train(), buf_size=8192),
     batch_size=256)
 ```
diff --git a/recommender_system/README.md b/recommender_system/README.md
index ac10f79eb7640c52160408afdf8a35461712cd9a..7ee55048784a8af070adb4efd94b0509a6d75638 100644
--- a/recommender_system/README.md
+++ b/recommender_system/README.md
@@ -404,7 +404,7 @@ feature = user.value() + movie.value()
 infer_dict = copy.copy(feeding)
 del infer_dict['score']

-prediction = paddle.infer(output=inference, parameters=parameters, input=[feature], feeding=infer_dict)
+prediction = paddle.infer(inference, parameters=parameters, input=[feature], feeding=infer_dict)
 score = (prediction[0][0] + 5.0) / 2
 print "[Predict] User %d Rating Movie %d With Score %.2f"%(user_id, movie_id, score)
 ```
diff --git a/recommender_system/index.en.html b/recommender_system/index.en.html
index 46b4682750597d26be6486a3f7763dc77c171b48..819ae07cec931028ed264007d9af97b99f7b5335 100644
--- a/recommender_system/index.en.html
+++ b/recommender_system/index.en.html
@@ -125,11 +125,6 @@ We use the [MovieLens ml-1m](http://files.grouplens.org/datasets/movielens/ml-1m
 `paddle.v2.datasets` package encapsulates multiple public datasets, including `cifar`, `imdb`, `mnist`, `moivelens` and `wmt14`, etc. There's no need for us to manually download and preprocess `MovieLens` dataset.

-```python
-# Run this block to show dataset's documentation
-help(paddle.v2.dataset.movielens)
-```
-
 The raw `MoiveLens` contains movie ratings, relevant features from both movies and users.
 For instance, one movie's feature could be:
@@ -224,7 +219,6 @@ from IPython import display
 import cPickle

 import paddle.v2 as paddle
-
 paddle.init(use_gpu=False)
 ```
@@ -338,9 +332,9 @@ trainer = paddle.trainer.SGD(cost=cost, parameters=parameters,
 `paddle.dataset.movielens.train` will yield records during each pass, after shuffling, a batch input is generated for training.

 ```python
-reader=paddle.reader.batch(
+reader=paddle.batch(
     paddle.reader.shuffle(
-        paddle.dataset.movielens.trai(), buf_size=8192),
+        paddle.dataset.movielens.train(), buf_size=8192),
     batch_size=256)
 ```
diff --git a/recommender_system/index.html b/recommender_system/index.html
index 8a0f013c366db827ef9de77baee6a9de6f1c6da6..97df04a45fec4ada58c15074c30d08bd2b911bc0 100644
--- a/recommender_system/index.html
+++ b/recommender_system/index.html
@@ -446,7 +446,7 @@ feature = user.value() + movie.value()
 infer_dict = copy.copy(feeding)
 del infer_dict['score']

-prediction = paddle.infer(output=inference, parameters=parameters, input=[feature], feeding=infer_dict)
+prediction = paddle.infer(inference, parameters=parameters, input=[feature], feeding=infer_dict)
 score = (prediction[0][0] + 5.0) / 2
 print "[Predict] User %d Rating Movie %d With Score %.2f"%(user_id, movie_id, score)
 ```
diff --git a/understand_sentiment/README.en.md b/understand_sentiment/README.en.md
index 105a851b3537cb100d3d47a4c47e1b2c06f1e685..dfca945c975ed86571a590284116626b52c1d76c 100644
--- a/understand_sentiment/README.en.md
+++ b/understand_sentiment/README.en.md
@@ -344,7 +344,7 @@ Finally, we can invoke `trainer.train` to start training:
 trainer.train(
     reader=train_reader,
     event_handler=event_handler,
-    feeding=feedig,
+    feeding=feeding,
     num_passes=10)
 ```
diff --git a/understand_sentiment/index.en.html b/understand_sentiment/index.en.html
index 1a99aa90a600167484c081da68813e02775dc156..14d87b0da2bd0b0c1ed4f12148efbec5c77ca3f5 100644
--- a/understand_sentiment/index.en.html
+++ b/understand_sentiment/index.en.html
@@ -386,7 +386,7 @@ Finally, we can invoke `trainer.train` to start training:
 trainer.train(
     reader=train_reader,
     event_handler=event_handler,
-    feeding=feedig,
+    feeding=feeding,
     num_passes=10)
 ```