Commit 28dd9163 authored by: P peterzhang2029

replace argparse with click and update example data

Parent 9c955cc0
......@@ -135,44 +135,52 @@ def train_reader(data_dir, word_dict):
The training script `train.py` accepts the following command-line arguments:
```
--train_data_dir TRAIN_DATA_DIR
path of training dataset (default: None). if this
parameter is not set, imdb dataset will be used.
--test_data_dir TEST_DATA_DIR
path of testing dataset (default: None). if this
parameter is not set, imdb dataset will be used.
--word_dict WORD_DICT
path of word dictionary (default: None). if this
parameter is not set, imdb dataset will be used. if
this parameter is set, but the file does not exist,
word dictionary will be built from the training data
automatically.
--class_num CLASS_NUM
class number.
--batch_size BATCH_SIZE
the number of training examples in one
forward/backward pass
--num_passes NUM_PASSES
number of passes to train
--model_save_dir MODEL_SAVE_DIR
path to save the trained models.
Options:
--train_data_dir TEXT path of training dataset (default: None). if this
parameter is not set, imdb dataset will be used.
--test_data_dir TEXT path of testing dataset (default: None). if this
parameter is not set, imdb dataset will be used.
--word_dict_path TEXT path of word dictionary (default: None). if this
parameter is not set, imdb dataset will be used. if
this parameter is set, but the file does not exist,
word dictionary will be built from the training data
automatically.
--class_num INTEGER class number (default: 2).
--batch_size INTEGER the number of training examples in one batch
(default: 32).
--num_passes INTEGER number of passes to train (default: 10).
--model_save_dir TEXT path to save the trained models (default: 'models').
--help Show this message and exit.
```
You can run this example directly by modifying the launch arguments of the `train.py` script. Taking the sample data under the `data` directory as an example, run in a terminal:
```bash
python train.py --train_data_dir 'data/train_data' --test_data_dir 'data/test_data' --word_dict 'dict.txt'
python train.py --train_data_dir 'data/train_data' --test_data_dir 'data/test_data' --word_dict_path 'dict.txt'
```
to train on the sample data.
### Inference
1. Modify the following variables in `infer.py` to specify the model and the test data.
1. Specify the command-line arguments
```python
model_path = "models/params_pass_00000.tar.gz" # path of the trained model
assert os.path.exists(model_path), "the trained model does not exist."
infer_path = 'data/infer.txt' # path of the test data
word_dict = 'dict.txt' # path of the word dictionary
The inference script `infer.py` accepts the following command-line arguments:
```
Options:
--data_path TEXT path of data for inference (default: None). if this
parameter is not set, imdb test dataset will be used.
--model_path TEXT path of saved model. (default:
'models/params_pass_00000.tar.gz')
--word_dict_path TEXT path of word dictionary (default: None). if this
parameter is not set, imdb dataset will be used.
--class_num INTEGER class number (default: 2).
--batch_size INTEGER the number of examples in one batch (default: 32).
--help Show this message and exit.
```
2. Taking the sample data under the `data` directory as an example, run in a terminal:
```bash
python infer.py --data_path 'data/infer.txt' --word_dict_path 'dict.txt'
```
2. Run `python infer.py` in a terminal
to run inference on the sample data.
At this point it seems almost unnecessary to state that Jon Bon Jovi delivers a firm, strong, seamless performance as Derek Bliss. His capability as an actor has been previously established by his critical acclaim garnered in other films (The Leading Man, No Looking Back). But, in case anyone is still wondering, yes, Jon Bon Jovi can act. He can act well and that's come to be expected of him. It's easy to separate Derek from the guy who belts out hits on VH-1.<br /><br />I generally would not watch a horror movie. I've come to expect them to focus on sensationalistic gore rather than dialogue and plot. What pleased me most about this film was that there really was a viable plot being moved along. The gore is not so much as to become the focus of the film and does not have a disturbingly realistic quality of films with higher technical effects budgets. So, gore fans might be disappointed, but story fans will not.<br /><br />Unlike an action film like U-571 where the dialogue takes a back seat to the bombast, we get a chance to know "the good guys" and actually care what happens to them. A few scenes are left unexplained (like Derek's hallucinations) but you get the feeling certain aspects were as they were to lay the foundation for a sequel. Unfortunately, with the lack of interest shown by Hollywood in this film, that sequel will never happen. These few instances are forgiveable knowing that Vampires could have been a continuing series.<br /><br />Is this the best film I've ever seen in my life? No. Is it a good way to spend about two hours being entertained? Yes. It won't leave the person who fears horror movies with insomnia and it won't leave the horror movie lover completely disappointed either. If you're somewhere in between the horror genre loather and the horror genre lover, this film is for you. It reaches a happy medium with the effects and story balancing each other.<br /><br />
The original Vampires (1998) is one of my favorites. I was curious to see how a sequel would work considering they used none of the original characters. I was quite surprised at how this played out. As a rule, sequels are never as good as the original, with a few exceptions. Though this one was not a great movie, the writer did well in keeping the main themes & vampire lore from the first one in tact. Jon Bon Jovi was a drawback initially, but he proved to be a half-way decent Slayer. I doubt anyone could top James Wood's performance in the first one, though. unless you bring in Buffy!<br /><br />All in all, this was a decent watch & I would watch it again.<br /><br />I was left with two questions, though... what happened to Jack Crow & how did Derek Bliss come to be a slayer? Guess we'll just have to leave that to imagination.
The movie opens with a flashback to Doddsville County High School on April Fool's Day. A group of students play a prank on class nerd Marty. When they are punished for playing said prank, they follow up with a bigger prank which (par for the course in slasher films involving pranks on class nerds) goes ridiculously awry leaving Marty simultaneously burned by fire and disfigured by acid for the sake of being thorough. Fast forward five years, where we find members of the student body gathering at the now abandoned high school for their five year class reunion. We find out that it is no coincidence that everyone at the reunion belonged to the clique of pranksters from the flashback scene, as all of the attendees are being stalked and killed by a mysterious, jester mask-clad murderer in increasingly complicated and mind-numbingly ludicrous fashions. It doesn't take Sherlock Holmes to solve the mystery of the killer's identity, as it is revealed to be none other than a scarred Marty who has seemingly been using his nerd rage and high intellect to bend the laws of physics and engineering in order to rig the school for his revenge scenario. The film takes a turn for the bizarre as Marty finishes exacting his revenge on his former tormentors, only to be haunted by their ghosts. Marty is finally pushed fully over the edge and takes his own life. Finally, the film explodes in a crescendo of disjointed weirdness as the whole revenge scenario is revealed to be a dream in the first place as Marty wakes up in a hospital bed, breaks free of his restraints, stabs a nurse, and finally disfigures his own face.<br /><br />The script is tired and suffers from a terminal case of horror movie logic. The only originality comes from the mind-numbingly convoluted ways that the victims are dispatched. The absurd it-was-all-a-dream ending feels tacked on. 
It's almost as if someone pointed out the disjointed nature of the film and the writer decided then and there that it was a dream.<br /><br />Technically speaking, the film is atrocious. Some scenes were filmed so dark that I had to pause the film and play with the color on my television. The acting is sub-par, even for slasher films. I can't help but think that casting was a part of the problem as all of the actors look at least five years older than the characters they portray, which makes the flashback scene even more unintentionally laughable. Their lack of commitment to the movie is made obvious as half of them can't bother to keep their accents straight through the movie.<br /><br />All of this being said, if you like bad horror movies, you might like this one, too. It isn't the worst film of the genre, but it's far from the best.
Robert Taylor definitely showed himself to be a fine dramatic actor in his role as a gun-slinging buffalo hunter in this 1956 western. It was one of the few times that Taylor would play a heavy in a film. Nonetheless, this picture was far from great as shortly after this, Taylor fled to television with the successful series The Detectives.<br /><br />Stuart Granger hid his British accent and turned in a formidable performance as Taylor's partner. <br /><br />Taylor is a bigot here and his hatred for the Indians really shows.<br /><br />Another very good performance here was by veteran actor Lloyd Nolan as an aged, drinking old-timer who joined in the hunt for buffalo as well. In his early scenes, Nolan was really doing an excellent take-off of Walter Huston in his Oscar-winning role in The Treasure of the Sierre Madre in 1948. Note the appearance of Russ Tamblyn in the film. The following year Tamblyn and Nolan would join in the phenomenal Peyton Place.<br /><br />The writing in the film is stiff at best. By the film's end, it's the elements of nature that did Taylor in. How about the elements of the writing here?
\ No newline at end of file
I was overtaken by the emotion. Unforgettable rendering of a wartime story which is unknown to most people. The performances were faultless and outstanding.
The original Vampires (1998) is one of my favorites. I was curious to see how a sequel would work considering they used none of the original characters. I was quite surprised at how this played out.
Without question, the worst ELVIS film ever made. The movie portrays all Indians as drunk, stupid, and lazy. Watch ELVIS's skin change color throughout the film.
I thought this movie was hysterical. I have watched it many times and recommend it highly. Mel Brooks, was excellent. The cast was fantastic..I don't understand how this movie gets a 2 out of 5 rating. I loved it.
\ No newline at end of file
1 I liked the film. Some of the action scenes were very interesting, tense and well done. I especially liked the opening scene which had a semi truck in it. A very tense action scene that seemed well done.<br /><br />Some of the transitional scenes were filmed in interesting ways such as time lapse photography, unusual colors, or interesting angles. Also the film is funny is several parts. I also liked how the evil guy was portrayed too. I'd give the film an 8 out of 10.
0 The plot for Descent, if it actually can be called a plot, has two noteworthy events. One near the beginning - one at the end. Together these events make up maybe 5% of the total movie time. Everything (and I mean _everything_) in between is basically the director's desperate effort to fill in the minutes. I like disturbing movies, I like dark movies and I don't get troubled by gritty scenes - but if you expect me to sit through 60 minutes of hazy/dark (literally) scenes with NO storyline you have another thing coming. Rosario Dawson, one of my favorite actresses is completely wasted here. And no, she doesn't get naked, not even in the NC-17 version, which I saw.<br /><br />If you have a couple of hours to throw away and want to watch "Descent", take a nap instead - you'll probably have more interesting dreams.
0 This film lacked something I couldn't put my finger on at first: charisma on the part of the leading actress. This inevitably translated to lack of chemistry when she shared the screen with her leading man. Even the romantic scenes came across as being merely the actors at play. It could very well have been the director who miscalculated what he needed from the actors. I just don't know.<br /><br />But could it have been the screenplay? Just exactly who was the chef in love with? He seemed more enamored of his culinary skills and restaurant, and ultimately of himself and his youthful exploits, than of anybody or anything else. He never convinced me he was in love with the princess.<br /><br />I was disappointed in this movie. But, don't forget it was nominated for an Oscar, so judge for yourself.
0 I read the book a long time back and don't specifically remember the plot but do remember that I enjoyed it. Since I'm home sick on the couch it seemed like a good idea and Hey !! It is a Lifetime movie.<br /><br />The movie is populated with grade B actors and actresses.<br /><br />The female cast is right out of Desperate Housewives. I've never seen the show but there are lots of commercials for the show and I get the gist. Is there nothing original anymore? Sure, but not on Lifetime.<br /><br />The male cast are all fairly effeminate looking and acting but the girls need to have husbands I suppose.<br /><br />In one scene a female is struggling with a male, for her life, and what does she do??? Kicks him in the testicles. What else? Women love that but let me tell you girls something. It's not as easy as it's always made to look.<br /><br />It wasn't all bad. I did get the chills a time or two so I have to credit someone with that.
1 I liked the film. Some of the action scenes were very interesting, tense and well done. I especially liked the opening scene which had a semi truck in it. Also the film is funny is several parts. I'd give the film an 8 out of 10.
0 The plot for Descent, if it actually can be called a plot, has two noteworthy events. One near the beginning - one at the end. Together these events make up maybe 5% of the total movie time. Everything (and I mean _everything_) in between is basically the director's desperate effort to fill in the minutes.
0 This film lacked something I couldn't put my finger on at first: charisma on the part of the leading actress. This inevitably translated to lack of chemistry when she shared the screen with her leading man. Even the romantic scenes came across as being merely the actors at play.
0 I read the book a long time back and don't specifically remember the plot but do remember that I enjoyed it. Since I'm home sick on the couch it seemed like a good idea and Hey !! It is a Lifetime movie.<br /><br />The movie is populated with grade B actors and actresses.<br /><br />The female cast is right out of Desperate Housewives.
\ No newline at end of file
0 I admit that I am a vampire addict: I have seen so many vampire movies I have lost count and this one is definitely in the top ten. I was very impressed by the original John Carpenter's Vampires and when I descovered there was a sequel I went straight out and bought it. This movie does not obey quite the same rules as the first, and it is not quite so dark, but it is close enough and I felt that it built nicely on the original.<br /><br />Jon Bon Jovi was very good as Derek Bliss: his performance was likeable and yet hard enough for the viewer to believe that he might actually be able to survive in the world in which he lives. One of my favourite parts was just after he meets Zoey and wanders into the bathroom of the diner to check to see if she is more than she seems. His comments are beautifully irreverant and yet emminently practical which contrast well with the rest of the scene as it unfolds.<br /><br />The other cast members were also well chosen and they knitted nicely to produce an entertaining and original film. It is not simply a rehash of the first movie and it has grown in a similar way to the way Fright Night II grew out of Fright Night. There are different elements which make it a fresh movie with a similar theme.<br /><br />If you like vampire movies I would recommend this one. If you prefer your films less bloody then choose something else.
0 Almost too well done... "John Carpenter's Vampires" was entertaining, a solid piece of popcorn-entertainment with a budget small enough not to be overrun by special effects. And obviously aiming on the "From Dusk Till Dawn"-audience. "Vampires: Los Muertos" tries the same starting with a rock-star Jon Bon Jovi playing one of the main characters, but does that almost too well...: I haven't seen Jon Bon Jovi in any other movie, so I am not able to compare his acting in "Vampires: Los Muertos" to his other roles, but I was really suprised of his good performance. After the movie started he convinced me not expecting him to grab any guitar and playing "It' my life" or something, but kill vampires, showing no mercy and doing a job which has to be done. This means a lot, because a part of the audience (also me) was probably thinking: "...just because he's a rockstar...". Of course Bon Jovi is not James Woods but to be honest: It could have been much worse, and in my opinion Bon Jovi did a very good performance. The vampiress played by Arly Jover is not the leather dressed killer-machine of a vampire-leader we met in Part 1 (or in similar way in "Ghosts of Mars"). Jover plays the vampire very seductive and very sexy, moving as lithe as a cat, attacking as fast as a snake and dressed in thin, light almost transparent very erotic cloth. And even the optical effects supporting her kind of movement are very well made. It really takes some beating. But the director is in some parts of the film only just avoiding turning the movie from an action-horrorfilm into a sensitive horrormovie like Murnau's "Nosferatu". You can almost see the director's temptation to create a movie with a VERY personal note and different to the original. This is the real strength of the movie and at the same time its weakest point: The audience celebrating the fun-bloodbath of the first movie is probably expecting a pure fun-bloodbath for the second time and might be a little disappointed. 
Make no mistake: "Vampires:Los Muertos" IS a fun-bloodbath but it's just not ALL THE TIME this kind of movie. Just think of the massacre in the bar compared to the scene in which the vampiress tries to seduce Zoey in the ruins: the bar-massacre is what you expect from american popcorn-entertainment, the seducing-Zoey-in-the-ruins-scene is ALMOST european-like cinema (the movie is eager to tell us more about the relationship between Zoey and the vampiress, but refuses answers at the same time. Because it would had slow down the action? Showed the audience a vampiress with a human past, a now suffering creature and not only a beast which is just slaughtering anybody). And that's the point to me which decides whether the movie is accepted by the audience of the original movie or not. And also: Is the "From Dusk Till Dawn"-audience really going to like this? I'm not sure about that. Nevertheless Tommy Lee Wallace did really a great job, "Vampires:Los Muertos" is surprisingly good. But I also think to direct a sequel of a popcorn movie Wallace is sometimes almost too creative, too expressive. Like he's keeping himself from developing his talent in order to satisfy the expectations of audience. In my opinion, Wallace' talent fills the movie with life and is maybe sometimes sucking it out at the same time. "Vampires: Los Muertos" is almost too well done. (I give it 7 of 10)
1 We all know that countless duds have graced the 80s slasher genre and often deserve nothing but our deepest disgust. Maybe that's a bit hastey but damn if "Slaughter High" wasn't terribly unoriginal, even for a slasher flick. Pretty much, the plot involves a kid who experienced a Carrie-like shower humiliation in high school and returns to the dilapidated building to seek out revenge on a group of former-bullies who all show up to reminisce. As you'd expect, they are killed off steadily by a masked madman on April 1st by means of electrocution, burning, hanging, and chemically altered beer. I've got a number of problems with the plot details and settings of this movie, but considering the ending, I feel the need to discard my complaints and just say that this is a complete waste of time. Ignore any thought of viewing this movie.
1 What a terrible movie. The acting was bad, the pacing was bad, the cinematography was bad, the directing was bad, the "special" effects were bad. You expect a certain degree of badness in a slasher, but even the killings were bad.<br /><br />First of all, the past event that set up the motive for the slaughter went on for 15 or 20 minutes. I thought it would never end. They could have removed 80% of it and explained what happened well enough.<br /><br />Then, the victims were invited to the "reunion" in an abandoned school which still had all the utilities turned on. One of the victims thought this was a little odd, but they dismissed it and decided to break in anyway.<br /><br />Finally, the killings were so fake as to be virtually unwatchable.<br /><br />There is no reason to watch this movie, unless you want to see some breasts, and not very good breasts at that. This movie makes Showgirls virtually indistinguishable from Citizen Kane.
0 It was a Sunday night and I was waiting for the advertised movie on TV. They said it was a comedy! The movie started, 10 minutes passed, after that 30 minutes and I didn't laugh not even once. The fact is that the movie ended and I didn't get even on echance to laugh.
0 I saw this piece of garbage on AMC last night, and wonder how it could be considered in any way an American Movie Classic. It was awful in every way. How badly did Jack Lemmon, James Stewart and the rest of the cast need cash that they would even consider doing this movie?
1 its not as good as the first movie,but its a good solid movie its has good car chase scenes,on the remake of this movie there a story for are hero to drive fast as his trying to rush to the side of his ailing wife,the ending is great just a good fair movie to watch in my opinion.
1 Rosalind Russell executes a power-house performance as Rosie Lord, a very wealthy woman with greedy heirs. With an Auntie Mame-type character, this actress can never go wrong. Her very-real terror at being in an insane assylum is a wonderful piece of acting. Everyone should watch this.
\ No newline at end of file
<html>
<head>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ['$','$'] ],
displayMath: [ ['$$','$$'] ],
processEscapes: true
},
"HTML-CSS": { availableFonts: ["TeX"] }
});
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js" async></script>
<script type="text/javascript" src="../.tools/theme/marked.js">
</script>
<link href="http://cdn.bootcss.com/highlight.js/9.9.0/styles/darcula.min.css" rel="stylesheet">
<script src="http://cdn.bootcss.com/highlight.js/9.9.0/highlight.min.js"></script>
<link href="http://cdn.bootcss.com/bootstrap/4.0.0-alpha.6/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/perfect-scrollbar/0.6.14/css/perfect-scrollbar.min.css" rel="stylesheet">
<link href="../.tools/theme/github-markdown.css" rel='stylesheet'>
</head>
<style type="text/css" >
.markdown-body {
box-sizing: border-box;
min-width: 200px;
max-width: 980px;
margin: 0 auto;
padding: 45px;
}
</style>
<body>
<div id="context" class="container-fluid markdown-body">
</div>
<!-- This block will be replaced by each markdown file content. Please do not change lines below.-->
<div id="markdown" style='display:none'>
# Text Classification Based on Double-Layer Sequences
## Introduction
Sequences are a primary type of input data in natural language processing tasks: sentences are composed of words, and multiple sentences in turn form paragraphs. A paragraph can therefore be viewed as a nested sequence (also called a double-layer sequence), each of whose elements is itself a sequence.
Double-layer sequences are a very flexible way of organizing data supported by PaddlePaddle. They help us better describe more complex language data such as paragraphs and multi-turn dialogues. With a double-layer sequence as input, we can design a hierarchical network that encodes the input at both the word level and the sentence level, and thus better handle complex language-understanding tasks.
This example demonstrates how to organize long text input (often at the paragraph or document level) as a double-layer sequence in PaddlePaddle and perform classification on long text.
## Model Overview
We treat a piece of text as a sequence of sentences, and each sentence as a sequence of words.
We first encode each sentence in the paragraph with a convolutional neural network; next, the representation vector of each sentence is passed through a pooling layer to obtain the encoding vector of the paragraph; finally, the paragraph encoding vector is fed to the classifier (a fully connected layer with softmax activation) to produce the final classification result.
**The model structure is shown in the figure below:**
<p align="center">
<img src="images/model.jpg" width = "60%" align="center"/><br/>
Figure 1. Text classification model based on double-layer sequences
</p>
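Stripped of the framework, the classification head of this model does two things: average the per-sentence vectors into one paragraph vector, then apply a fully connected layer with softmax. A minimal framework-free sketch in plain Python (the sentence vectors, weights, and helper names below are toy illustrations, not values produced by the real network):

```python
import math

def mean_pool(sentence_vecs):
    """Average a list of equal-length sentence vectors into one paragraph vector."""
    n = len(sentence_vecs)
    return [sum(col) / n for col in zip(*sentence_vecs)]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(sentence_vecs, weights, bias):
    """weights is a class_num x hidden_size matrix; returns class probabilities."""
    para_vec = mean_pool(sentence_vecs)
    logits = [sum(w * x for w, x in zip(row, para_vec)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

# Two sentences, each already encoded as a toy 3-d vector "by the CNN".
sents = [[1.0, 0.0, 2.0], [3.0, 2.0, 0.0]]
weights = [[0.1, 0.2, 0.3], [0.3, 0.1, 0.2]]  # 2 classes
bias = [0.0, 0.0]
probs = classify(sents, weights, bias)
```

The real model performs the same steps with `paddle.layer.pooling` and a softmax-activated projection, but over learned parameters.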
The PaddlePaddle implementation of this network structure can be found in `network_conf.py`.
To process a double-layer sequence, the double-layer data must first be transformed into single-layer sequences, and each single-layer sequence is then processed on its own. In PaddlePaddle, `recurrent_group` is the main tool for building hierarchical models over double-layer sequences. Here we use two nested `recurrent_group`s: the outer `recurrent_group` decomposes the paragraph into sentences, so the input received by its `step` function is a sequence of sentences; the inner `recurrent_group` decomposes each sentence into words, so the input received by its `step` function is the non-sequence words.
At the word level, we use a CNN that takes word embeddings as input and learns a sentence representation; at the paragraph level, the sentence representations are pooled to obtain the paragraph representation.
``` python
nest_group = paddle.layer.recurrent_group(input=[paddle.layer.SubsequenceInput(emb),
hidden_size],
step=cnn_cov_group)
```
Each single-layer sequence obtained from the decomposition is passed through a CNN to learn its vector representation. The CNN consists of the following parts:
- **Convolution layer**: In text classification, convolution is performed along the time dimension; the width of each convolution kernel matches that of the matrix produced by the word-embedding layer, and each convolution yields a "feature map". Using several kernels of different heights yields several feature maps. By default, this example uses kernels of size 3 (the red box in Figure 1) and 4 (the blue box in Figure 1).
- **Max pooling layer**: Max pooling is applied to each feature map separately. Since each feature map is itself a vector, max pooling simply selects the largest element of each vector; the selected maxima are then concatenated into a new vector.
- **Linear projection layer**: The max-pooled results of the different convolutions are concatenated into one long vector, which is passed through a linear projection to obtain the representation vector of the single-layer sequence.
The CNN is implemented as follows:
```python
def cnn_cov_group(group_input, hidden_size):
conv3 = paddle.networks.sequence_conv_pool(
input=group_input, context_len=3, hidden_size=hidden_size)
conv4 = paddle.networks.sequence_conv_pool(
input=group_input, context_len=4, hidden_size=hidden_size)
output_group = paddle.layer.fc(input=[conv3, conv4],
size=hidden_size,
param_attr=paddle.attr.ParamAttr(name='_cov_value_weight'),
bias_attr=paddle.attr.ParamAttr(name='_cov_value_bias'),
act=paddle.activation.Linear())
return output_group
```
`paddle.networks.sequence_conv_pool` is a pre-packaged PaddlePaddle module that combines text sequence convolution with pooling, and can be called directly.
After the representation vector of every sentence has been obtained, all sentence vectors are passed through an average pooling layer to obtain the representation vector of the sample, which is then fed to a fully connected layer to produce the final prediction. The code is as follows:
```python
avg_pool = paddle.layer.pooling(input=nest_group, pooling_type=paddle.pooling.Avg(),
agg_level=paddle.layer.AggregateLevel.TO_NO_SEQUENCE)
prob = paddle.layer.mixed(size=class_num,
input=[paddle.layer.full_matrix_projection(input=avg_pool)],
act=paddle.activation.Softmax())
```
## Running with the PaddlePaddle Built-in Dataset
### Training
Run in a terminal:
```bash
python train.py
```
This runs the example on `imdb`, the sentiment classification dataset built into PaddlePaddle.
### Inference
After training, the models are stored in the specified directory (`models` by default). Run in a terminal:
```bash
python infer.py
```
By default, the inference script loads the model trained for one pass and tests it on the imdb test set.
## Training and Inference with Custom Data
### Training
1. Organize the data
The input data format is as follows: each line is one sample, with fields separated by `\t`; the first column is the class label and the second column is the input text. Two example lines:
```
1 This movie is very good. The actor is so handsome.
0 What a terrible movie. I waste so much time.
```
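A line in this format can be recovered by splitting on the first tab; a small sketch (the `parse_sample` helper is ours, for illustration only):

```python
def parse_sample(line):
    """Split one '<label>\t<text>' line into (int label, text)."""
    label, text = line.rstrip("\n").split("\t", 1)
    return int(label), text

samples = [
    "1\tThis movie is very good. The actor is so handsome.",
    "0\tWhat a terrible movie. I waste so much time.",
]
parsed = [parse_sample(s) for s in samples]
```

Using `maxsplit=1` keeps any further tabs inside the text column intact.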
2. Write the data-reading interface
A custom data-reading interface only needs a Python generator that implements the logic of **parsing one training sample from the raw input text**. The following code snippet reads the raw data and returns values of types `paddle.data_type.integer_value_sub_sequence` and `paddle.data_type.integer_value`:
```python
def train_reader(data_dir, word_dict):
"""
Reader interface for training data
:param data_dir: data directory
:type data_dir: str
:param word_dict: the word dictionary,
which must contain an "<unk>" entry.
:type word_dict: Python dict
"""
def reader():
UNK_ID = word_dict['<unk>']
word_col = 1
lbl_col = 0
for file_name in os.listdir(data_dir):
file_path = os.path.join(data_dir, file_name)
if not os.path.isfile(file_path):
continue
with open(file_path, "r") as f:
for line in f:
line_split = line.strip().split("\t")
doc = line_split[word_col]
doc_ids = []
for sent in doc.strip().split("."):
sent_ids = [
word_dict.get(w, UNK_ID)
for w in sent.split()]
if sent_ids:
doc_ids.append(sent_ids)
yield doc_ids, int(line_split[lbl_col])
return reader
```
Note that this example uses the English period `'.'` as the delimiter to split a piece of text into a number of sentences, and each sentence is represented as an array of word-dictionary indices (`sent_ids`). Since the representation of a sample (`doc_ids`) contains all the sentences of that text, its type is `paddle.data_type.integer_value_sub_sequence`.
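The sentence-splitting and id-mapping logic of the reader can be exercised on its own with a toy dictionary; a small sketch (the `doc_to_ids` helper and the tiny `word_dict` below are illustrative, not part of the project):

```python
def doc_to_ids(doc, word_dict):
    """Split a document on '.' and map each sentence to word ids,
    falling back to the '<unk>' id for out-of-vocabulary words."""
    unk_id = word_dict['<unk>']
    doc_ids = []
    for sent in doc.strip().split("."):
        sent_ids = [word_dict.get(w, unk_id) for w in sent.split()]
        if sent_ids:  # drop empty sentences, e.g. after the trailing '.'
            doc_ids.append(sent_ids)
    return doc_ids

word_dict = {'<unk>': 0, 'this': 1, 'movie': 2, 'is': 3, 'good': 4}
ids = doc_to_ids("this movie is good. so handsome.", word_dict)
# each inner list is one sentence, i.e. one sub-sequence of the sample
```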
3. Train with command-line arguments
The training script `train.py` accepts the following command-line arguments:
```
Options:
--train_data_dir TEXT path of training dataset (default: None). if this
parameter is not set, imdb dataset will be used.
--test_data_dir TEXT path of testing dataset (default: None). if this
parameter is not set, imdb dataset will be used.
--word_dict_path TEXT path of word dictionary (default: None). if this
parameter is not set, imdb dataset will be used. if
this parameter is set, but the file does not exist,
word dictionary will be built from the training data
automatically.
--class_num INTEGER class number (default: 2).
--batch_size INTEGER the number of training examples in one batch
(default: 32).
--num_passes INTEGER number of passes to train (default: 10).
--model_save_dir TEXT path to save the trained models (default: 'models').
--help Show this message and exit.
```
You can run this example directly by modifying the launch arguments of the `train.py` script. Taking the sample data under the `data` directory as an example, run in a terminal:
```bash
python train.py --train_data_dir 'data/train_data' --test_data_dir 'data/test_data' --word_dict_path 'dict.txt'
```
to train on the sample data.
### Inference
1. Specify the command-line arguments
The inference script `infer.py` accepts the following command-line arguments:
```
Options:
--data_path TEXT path of data for inference (default: None). if this
parameter is not set, imdb test dataset will be used.
--model_path TEXT path of saved model. (default:
'models/params_pass_00000.tar.gz')
--word_dict_path TEXT path of word dictionary (default: None). if this
parameter is not set, imdb dataset will be used.
--class_num INTEGER class number (default: 2).
--batch_size INTEGER the number of examples in one batch (default: 32).
--help Show this message and exit.
```
2. Taking the sample data under the `data` directory as an example, run in a terminal:
```bash
python infer.py --data_path 'data/infer.txt' --word_dict_path 'dict.txt'
```
to run inference on the sample data.
</div>
<!-- You can change the lines below now. -->
<script type="text/javascript">
marked.setOptions({
renderer: new marked.Renderer(),
gfm: true,
breaks: false,
smartypants: true,
highlight: function(code, lang) {
code = code.replace(/&amp;/g, "&")
code = code.replace(/&gt;/g, ">")
code = code.replace(/&lt;/g, "<")
code = code.replace(/&nbsp;/g, " ")
return hljs.highlightAuto(code, [lang]).value;
}
});
document.getElementById("context").innerHTML = marked(
document.getElementById("markdown").innerHTML)
</script>
</body>
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import os
import gzip
import click
import paddle.v2 as paddle
import reader
from network_conf import nest_net
from utils import logger
from utils import logger, load_dict
@click.command('infer')
@click.option(
"--data_path",
default=None,
help=("path of data for inference (default: None). "
"if this parameter is not set, "
"imdb test dataset will be used."))
@click.option(
"--model_path",
type=str,
default='models/params_pass_00000.tar.gz',
help=("path of saved model. "
"(default: 'models/params_pass_00000.tar.gz')"))
@click.option(
"--word_dict_path",
type=str,
default=None,
help=("path of word dictionary (default: None)."
"if this parameter is not set, imdb dataset will be used."))
@click.option(
"--class_num", type=int, default=2, help="class number (default: 2).")
@click.option(
"--batch_size",
type=int,
default=32,
help="the number of examples in one batch (default: 32).")
def infer(data_path, model_path, word_dict_path, batch_size, class_num):
def _infer_a_batch(inferer, test_batch, ids_2_word):
probs = inferer.infer(input=test_batch, field=["value"])
......@@ -24,6 +49,7 @@ def infer(data_path, model_path, word_dict_path, batch_size, class_num):
" ".join(["{:0.4f}".format(p)
for p in prob]), word_text))
assert os.path.exists(model_path), "the trained model does not exist."
logger.info("begin to predict...")
use_default_data = (data_path is None)
......@@ -37,7 +63,7 @@ def infer(data_path, model_path, word_dict_path, batch_size, class_num):
assert os.path.exists(
word_dict_path), "the word dictionary file does not exist"
word_dict = reader.load_dict(word_dict_path)
word_dict = load_dict(word_dict_path)
word_reverse_dict = dict((value, key)
for key, value in word_dict.iteritems())
......@@ -68,15 +94,4 @@ def infer(data_path, model_path, word_dict_path, batch_size, class_num):
if __name__ == "__main__":
model_path = "models/params_pass_00000.tar.gz"
assert os.path.exists(model_path), "the trained model does not exist."
infer_path = None
word_dict = None
infer(
data_path=infer_path,
word_dict_path=word_dict,
model_path=model_path,
batch_size=10,
class_num=2)
infer()
......@@ -6,12 +6,16 @@ def cnn_cov_group(group_input, hidden_size):
input=group_input, context_len=3, hidden_size=hidden_size)
conv4 = paddle.networks.sequence_conv_pool(
input=group_input, context_len=4, hidden_size=hidden_size)
#output_group = paddle.layer.concat(input=[conv3, conv4])
output_group = paddle.layer.fc(
input=[conv3, conv4],
size=hidden_size,
param_attr=paddle.attr.ParamAttr(name='_cov_value_weight'),
bias_attr=paddle.attr.ParamAttr(name='_cov_value_bias'),
act=paddle.activation.Linear())
return output_group
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
IMDB dataset.
@@ -157,37 +155,6 @@ def imdb_word_dict():
re.compile("aclImdb/((train)|(test))/((pos)|(neg))/.*\.txt$"), 150)
def build_dict(data_dir, save_path, use_col=1, cutoff_fre=1):
values = collections.defaultdict(int)
for file_name in os.listdir(data_dir):
file_path = os.path.join(data_dir, file_name)
if not os.path.isfile(file_path):
continue
with open(file_path, "r") as fdata:
for line in fdata:
line_splits = line.strip().split("\t")
if len(line_splits) < use_col:
continue
doc = line_splits[use_col]
for sent in doc.strip().split("."):
for w in sent.split():
values[w] += 1
values['<unk>'] = cutoff_fre
with open(save_path, "w") as f:
for v, count in sorted(
values.iteritems(), key=lambda x: x[1], reverse=True):
if count < cutoff_fre:
break
f.write("%s\t%d\n" % (v, count))
def load_dict(dict_path):
return dict((line.strip().split("\t")[0], idx)
for idx, line in enumerate(open(dict_path, "r").readlines()))
def train_reader(data_dir, word_dict):
"""
Reader interface for training data
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import gzip
import click
import paddle.v2 as paddle
import reader
from network_conf import nest_net
from utils import logger, parse_train_cmd
def train(train_data_dir=None,
test_data_dir=None,
word_dict_path=None,
model_save_dir="models",
batch_size=32,
num_passes=10):
from utils import build_dict, load_dict, logger
@click.command('train')
@click.option(
"--train_data_dir",
default=None,
help=("path of training dataset (default: None). "
"if this parameter is not set, "
"imdb dataset will be used."))
@click.option(
"--test_data_dir",
default=None,
help=("path of testing dataset (default: None). "
"if this parameter is not set, "
"imdb dataset will be used."))
@click.option(
"--word_dict_path",
type=str,
default=None,
    help=("path of word dictionary (default: None). "
          "if this parameter is not set, imdb dataset will be used. "
          "if this parameter is set, but the file does not exist, "
          "word dictionary will be built from "
          "the training data automatically."))
@click.option(
"--class_num", type=int, default=2, help="class number (default: 2).")
@click.option(
"--batch_size",
type=int,
default=32,
help=("the number of training examples in one batch "
"(default: 32)."))
@click.option(
"--num_passes",
type=int,
default=10,
help="number of passes to train (default: 10).")
@click.option(
"--model_save_dir",
type=str,
default="models",
help="path to save the trained models (default: 'models').")
def train(train_data_dir, test_data_dir, word_dict_path, class_num,
model_save_dir, batch_size, num_passes):
"""
:params train_data_path: path of training data, if this parameter
is not specified, imdb dataset will be used to run this example
@@ -34,6 +69,10 @@ def train(train_data_dir=None,
:params num_pass: train pass number
:type num_pass: int
"""
if train_data_dir is not None:
        assert word_dict_path, ("the parameters train_data_dir and "
                                "word_dict_path should be set at the same time.")
if not os.path.exists(model_save_dir):
os.mkdir(model_save_dir)
@@ -60,14 +99,14 @@ def train(train_data_dir=None,
# build the word dictionary to map the original string-typed
# words into integer-typed index
reader.build_dict(
build_dict(
data_dir=train_data_dir,
save_path=word_dict_path,
use_col=1,
cutoff_fre=0)
word_dict = reader.load_dict(word_dict_path)
class_num = args.class_num
word_dict = load_dict(word_dict_path)
class_num = class_num
logger.info("class number is : %d." % class_num)
train_reader = paddle.batch(
@@ -145,19 +184,5 @@ def train(train_data_dir=None,
logger.info("Training has finished.")
def main(args):
train(
train_data_dir=args.train_data_dir,
test_data_dir=args.test_data_dir,
word_dict_path=args.word_dict,
batch_size=args.batch_size,
num_passes=args.num_passes,
model_save_dir=args.model_save_dir)
if __name__ == "__main__":
args = parse_train_cmd()
if args.train_data_dir is not None:
        assert args.word_dict, ("the parameters train_data_dir and "
                                "word_dict should be set at the same time.")
main(args)
train()
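With click, each `@click.option` decorator becomes a keyword argument of the decorated function, so the separate `parse_train_cmd()`/`main(args)` plumbing removed by this commit is no longer needed. A minimal runnable sketch of the same pattern (hypothetical options, exercised with click's `CliRunner`):

```python
import click
from click.testing import CliRunner

# A minimal command mirroring the pattern in train.py: each @click.option
# is delivered directly as a keyword argument of the decorated function.
@click.command("train")
@click.option("--batch_size", type=int, default=32,
              help="the number of training examples in one batch (default: 32).")
@click.option("--num_passes", type=int, default=10,
              help="number of passes to train (default: 10).")
def train(batch_size, num_passes):
    click.echo("batch_size=%d num_passes=%d" % (batch_size, num_passes))

# Invoke the command in-process; unspecified options keep their defaults.
result = CliRunner().invoke(train, ["--batch_size", "64"])
```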
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import os
import argparse
import logging
from collections import defaultdict
logger = logging.getLogger("paddle")
logger.setLevel(logging.INFO)
def parse_train_cmd():
parser = argparse.ArgumentParser(
description="PaddlePaddle text classification demo")
parser.add_argument(
"--train_data_dir",
type=str,
required=False,
help=("path of training dataset (default: None). "
"if this parameter is not set, "
"imdb dataset will be used."),
default=None)
parser.add_argument(
"--test_data_dir",
type=str,
required=False,
help=("path of testing dataset (default: None). "
"if this parameter is not set, "
"imdb dataset will be used."),
default=None)
parser.add_argument(
"--word_dict",
type=str,
required=False,
        help=("path of word dictionary (default: None). "
              "if this parameter is not set, imdb dataset will be used. "
              "if this parameter is set, but the file does not exist, "
              "word dictionary will be built from "
              "the training data automatically."),
default=None)
parser.add_argument(
"--class_num",
type=int,
required=False,
help=("class number."),
default=2)
parser.add_argument(
"--batch_size",
type=int,
default=32,
help="the number of training examples in one forward/backward pass")
parser.add_argument(
"--num_passes", type=int, default=10, help="number of passes to train")
parser.add_argument(
"--model_save_dir",
type=str,
required=False,
help=("path to save the trained models."),
default="models")
def build_dict(data_dir, save_path, use_col=1, cutoff_fre=1):
values = defaultdict(int)
for file_name in os.listdir(data_dir):
file_path = os.path.join(data_dir, file_name)
if not os.path.isfile(file_path):
continue
with open(file_path, "r") as fdata:
for line in fdata:
line_splits = line.strip().split("\t")
if len(line_splits) < use_col:
continue
doc = line_splits[use_col]
for sent in doc.strip().split("."):
for w in sent.split():
values[w] += 1
values['<unk>'] = cutoff_fre
with open(save_path, "w") as f:
for v, count in sorted(
values.iteritems(), key=lambda x: x[1], reverse=True):
if count < cutoff_fre:
break
f.write("%s\t%d\n" % (v, count))
return parser.parse_args()
def load_dict(dict_path):
return dict((line.strip().split("\t")[0], idx)
for idx, line in enumerate(open(dict_path, "r").readlines()))
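The dictionary file that `build_dict` writes is a plain tab-separated `word<TAB>count` list sorted by descending frequency, and `load_dict` then maps each word to its line index. A minimal self-contained sketch of that round trip (a hypothetical two-sentence corpus; Python 3's `dict.items()` stands in for the `iteritems()` used above):

```python
import collections
import os
import tempfile

# Count words in a tiny hypothetical corpus, mirroring build_dict's logic.
corpus = ["the cat sat", "the dog sat"]
counts = collections.defaultdict(int)
for line in corpus:
    for w in line.split():
        counts[w] += 1
counts["<unk>"] = 1  # build_dict pins <unk> at the cutoff frequency

# Write word<TAB>count pairs sorted by descending frequency.
dict_path = os.path.join(tempfile.mkdtemp(), "dict.txt")
with open(dict_path, "w") as f:
    for w, c in sorted(counts.items(), key=lambda x: x[1], reverse=True):
        f.write("%s\t%d\n" % (w, c))

# load_dict maps each word to its line index in the file.
word_dict = dict((line.strip().split("\t")[0], idx)
                 for idx, line in enumerate(open(dict_path)))
```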