Unverified · Commit 9ec8847b · authored by 那伊抹微笑, committed by GitHub

Merge pull request #1 from apachecn/v0.1.0

update from apachecn
FastText is a library for efficient learning of word representations and sentence classification.
## Project Leads
* [@wnma](https://github.com/wnma3mz)
* [@Lisanaaa](https://github.com/Lisanaaa)
**Maintained by:** [@ApacheCN](https://github.com/apachecn)
## Contributors
### Contributors to the FastText 0.1.0 Chinese documentation
| Title | Translator | Proofreader |
| ------------------------------------------------------------ | ------------------------------------------ | ---- |
| [api](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/api.md) | [@片刻](https://github.com/jiangzhonglian) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [cheatsheet](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/cheatsheet.md) | [@片刻](https://github.com/jiangzhonglian) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [crawl-vectors](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/crawl-vectors.md) | [@GMbappe](https://github.com/GMbappe) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [dataset](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/dataset.md) | [@片刻](https://github.com/jiangzhonglian) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [english-vectors](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/english-vectors.md) | [@GMbappe](https://github.com/GMbappe) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [faqs](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/faqs.md) | [@Twinkle](https://github.com/kemingzeng) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [language-identification](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/language-identification.md) | [@wnma](https://github.com/wnma3mz) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [options](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/options.md) | [@片刻](https://github.com/jiangzhonglian) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [pretrained-vectors](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/pretrained-vectors.md) | [@片刻](https://github.com/jiangzhonglian) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [references](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/references.md) | [@片刻](https://github.com/jiangzhonglian) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [supervised-models](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/supervised-models.md) | [@片刻](https://github.com/jiangzhonglian) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [supervised-tutorial](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/supervised-tutorial.md) | [@Lisanaaa](https://github.com/Lisanaaa) | [@wnma](https://github.com/wnma3mz) |
| [support](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/support.md) | [@片刻](https://github.com/jiangzhonglian) | [@Lisanaaa](https://github.com/Lisanaaa) |
| [unsupervised-tutorials](https://github.com/apachecn/fasttext-doc-zh/blob/v0.1.0/doc/zh/unsupervised-tutorials.md) | [@wnma](https://github.com/wnma3mz) | [@Lisanaaa](https://github.com/Lisanaaa) |
## Join Us
If you would like to join us, see: <http://www.apachecn.org/organization/209.html>.
All you swagger-loving experts are welcome.
## Feedback
- Contact the project leads [@wnma](https://github.com/wnma3mz) or [@Lisanaaa](https://github.com/Lisanaaa).
- Open an issue on our GitHub repo [apachecn/fasttext-doc-zh](https://github.com/apachecn/fasttext-doc-zh).
- Send an email to: fasttext#apachecn.org (replace # with @).
- Contact the group owner/admins in our [organization study & discussion group](http://www.apachecn.org/organization/348.html).
---
id: api
title: API
---
We automatically generate our [API documentation](/docs/en/html/index.html) with doxygen.
---
id: cheatsheet
title: Cheatsheet
---
## Word representation learning
In order to learn word vectors, do:
```bash
$ ./fasttext skipgram -input data.txt -output model
```
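fastText also ships a `cbow` subcommand that trains CBOW vectors on the same data; a minimal variant of the command above:

```bash
# Train CBOW word vectors instead of skipgram (same input/output options)
$ ./fasttext cbow -input data.txt -output model
```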
## Obtaining word vectors
Print word vectors for a text file `queries.txt` containing words.
```bash
$ ./fasttext print-word-vectors model.bin < queries.txt
```
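Since `print-word-vectors` reads words from standard input, a single word can also be piped in directly without a queries file; a quick sketch (the word is just an example):

```bash
# Query the vector of one word
$ echo "asparagus" | ./fasttext print-word-vectors model.bin
```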
## Text classification
In order to train a text classifier, do:
```bash
$ ./fasttext supervised -input train.txt -output model
```
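For reference, `train.txt` is expected to hold one example per line, with labels marked by the `__label__` prefix (fastText's default, configurable with `-label`). A hypothetical two-line sample:

```bash
# Illustrative contents only; real training data would be much larger
$ head -n 2 train.txt
__label__positive this movie was great
__label__negative the plot made no sense at all
```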
Once the model has been trained, you can evaluate it by computing the precision and recall at k (P@k and R@k) on a test set using:
```bash
$ ./fasttext test model.bin test.txt 1
```
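Here P@k is the fraction of the k predicted labels that are correct, and R@k is the fraction of an example's true labels recovered among the top k predictions; for an example with two true labels whose top-1 prediction is correct, P@1 = 1.0 and R@1 = 0.5. The command prints the number of test examples along with both scores; the figures below are purely illustrative:

```bash
$ ./fasttext test model.bin test.txt 1
N	3000
P@1	0.965
R@1	0.965
```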
In order to obtain the k most likely labels for a piece of text, use:
```bash
$ ./fasttext predict model.bin test.txt k
```
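`predict` also accepts `-` in place of a file, reading the text from standard input, which is handy for quick interactive checks; a sketch:

```bash
# Predict the 3 most likely labels for a single piece of text
$ echo "Which baking dish is best to bake a banana bread ?" | ./fasttext predict model.bin - 3
```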
In order to obtain the k most likely labels and their associated probabilities for a piece of text, use:
```bash
$ ./fasttext predict-prob model.bin test.txt k
```
If you want to compute vector representations of sentences or paragraphs, please use:
```bash
$ ./fasttext print-sentence-vectors model.bin < text.txt
```
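Like the word-vector command, `print-sentence-vectors` reads from standard input, so a single sentence can be piped in; a sketch:

```bash
# One sentence in, one sentence vector out
$ echo "the quick brown fox jumps over the lazy dog" | ./fasttext print-sentence-vectors model.bin
```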
## Quantization
In order to create a `.ftz` file with a smaller memory footprint, do:
```bash
$ ./fasttext quantize -output model
```
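Quantization can be combined with retraining and feature pruning for larger size reductions; a hedged sketch using options from the supervised tutorial, where `train.txt` is assumed to be the original training file:

```bash
# Retrain during quantization and keep only the 100k most important features
$ ./fasttext quantize -output model -input train.txt -qnorm -retrain -cutoff 100000
```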
All other commands, such as test, also work with this model:
```bash
$ ./fasttext test model.ftz test.txt
```
---
id: crawl-vectors
title: Word vectors for 157 languages
---
We distribute pre-trained word vectors for 157 languages, trained on [*Common Crawl*](http://commoncrawl.org/) and [*Wikipedia*](https://www.wikipedia.org) using fastText.
These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives.
We also distribute three new word analogy datasets, for French, Hindi and Polish.
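For orientation only, the stated hyper-parameters map roughly onto the standard CLI flags shown below. The position-weighted CBOW variant used for these vectors is not exposed by the stock `fasttext` binary, so this sketch is an approximation of the setup, not the exact training command:

```bash
# Approximate CLI equivalent of the described setup:
# dimension 300, character n-grams of length 5, window 5, 10 negatives
$ ./fasttext cbow -input data.txt -output model -dim 300 -minn 5 -maxn 5 -ws 5 -neg 10
```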
### Format
The word vectors are available in both binary and text formats.
Using the binary models, vectors for out-of-vocabulary words can be obtained with
```
$ ./fasttext print-word-vectors wiki.it.300.bin < oov_words.txt
```
where the file oov_words.txt contains out-of-vocabulary words.
In the text format, each line contains a word followed by its vector.
Each value is space separated, and words are sorted by frequency in descending order.
These text models can easily be loaded in Python using the following code:
```python
import io

def load_vectors(fname):
    # Open the .vec text file; the first line holds the vocabulary size
    # and the vector dimension, each following line a word and its values.
    fin = io.open(fname, 'r', encoding='utf-8', newline='\n', errors='ignore')
    n, d = map(int, fin.readline().split())
    data = {}
    for line in fin:
        tokens = line.rstrip().split(' ')
        data[tokens[0]] = map(float, tokens[1:])
    return data
```
### Tokenization
We used the [*Stanford word segmenter*](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [*Mecab*](http://taku910.github.io/mecab/) for Japanese and [*UETsegmenter*](https://github.com/phongnt570/UETsegmenter) for Vietnamese.
For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [*Europarl*](http://www.statmt.org/europarl/) preprocessing tools.
For the remaining languages, we used the ICU tokenizer.
More information about the training of these models can be found in the article [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893).
### License
The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/).
### References
If you use these word vectors, please cite the following paper:
E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893)
```
@inproceedings{grave2018learning,
  title={Learning Word Vectors for 157 Languages},
  author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas},
  booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)},
  year={2018}
}
```
### Evaluation datasets
The analogy evaluation datasets described in the paper are available here: [French](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-analogies/questions-words-fr.txt), [Hindi](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-analogies/questions-words-hi.txt), [Polish](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-analogies/questions-words-pl.txt).
### Models
The models can be downloaded from:
||||
|-|-|-|
| Afrikaans: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.af.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.af.300.vec.gz) | Albanian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sq.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sq.300.vec.gz) | Alemannic: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.als.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.als.300.vec.gz) |
| Amharic: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.am.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.am.300.vec.gz) | Arabic: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ar.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ar.300.vec.gz) | Aragonese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.an.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.an.300.vec.gz) |
| Armenian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hy.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hy.300.vec.gz) | Assamese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.as.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.as.300.vec.gz) | Asturian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ast.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ast.300.vec.gz) |
| Azerbaijani: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.az.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.az.300.vec.gz) | Bashkir: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ba.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ba.300.vec.gz) | Basque: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.eu.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.eu.300.vec.gz) |
| Bavarian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bar.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bar.300.vec.gz) | Belarusian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.be.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.be.300.vec.gz) | Bengali: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bn.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bn.300.vec.gz) |
| Bihari: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bh.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bh.300.vec.gz) | Bishnupriya Manipuri: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bpy.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bpy.300.vec.gz) | Bosnian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bs.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bs.300.vec.gz) |
| Breton: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.br.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.br.300.vec.gz) | Bulgarian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bg.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bg.300.vec.gz) | Burmese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.my.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.my.300.vec.gz) |
| Catalan: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ca.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ca.300.vec.gz) | Cebuano: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ceb.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ceb.300.vec.gz) | Central Bicolano: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bcl.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bcl.300.vec.gz) |
| Chechen: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ce.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ce.300.vec.gz) | Chinese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.zh.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.zh.300.vec.gz) | Chuvash: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.cv.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.cv.300.vec.gz) |
| Corsican: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.co.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.co.300.vec.gz) | Croatian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hr.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hr.300.vec.gz) | Czech: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.cs.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.cs.300.vec.gz) |
| Danish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.da.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.da.300.vec.gz) | Divehi: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.dv.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.dv.300.vec.gz) | Dutch: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nl.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nl.300.vec.gz) |
| Eastern Punjabi: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pa.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pa.300.vec.gz) | Egyptian Arabic: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.arz.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.arz.300.vec.gz) | Emilian-Romagnol: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.eml.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.eml.300.vec.gz) |
| Erzya: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.myv.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.myv.300.vec.gz) | Esperanto: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.eo.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.eo.300.vec.gz) | Estonian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.et.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.et.300.vec.gz) |
| Fiji Hindi: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hif.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hif.300.vec.gz) | Finnish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fi.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fi.300.vec.gz) | French: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fr.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fr.300.vec.gz) |
| Galician: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gl.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gl.300.vec.gz) | Georgian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ka.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ka.300.vec.gz) | German: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.de.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.de.300.vec.gz) |
| Goan Konkani: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gom.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gom.300.vec.gz) | Greek: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.el.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.el.300.vec.gz) | Gujarati: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gu.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gu.300.vec.gz) |
| Haitian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ht.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ht.300.vec.gz) | Hebrew: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.he.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.he.300.vec.gz) | Hill Mari: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mrj.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mrj.300.vec.gz) |
| Hindi: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hi.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hi.300.vec.gz) | Hungarian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hu.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hu.300.vec.gz) | Icelandic: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.is.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.is.300.vec.gz) |
| Ido: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.io.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.io.300.vec.gz) | Ilokano: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ilo.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ilo.300.vec.gz) | Indonesian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.id.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.id.300.vec.gz) |
| Interlingua: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ia.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ia.300.vec.gz) | Irish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ga.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ga.300.vec.gz) | Italian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.it.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.it.300.vec.gz) |
| Japanese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ja.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ja.300.vec.gz) | Javanese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.jv.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.jv.300.vec.gz) | Kannada: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.kn.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.kn.300.vec.gz) |
| Kapampangan: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pam.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pam.300.vec.gz) | Kazakh: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.kk.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.kk.300.vec.gz) | Khmer: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.km.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.km.300.vec.gz) |
| Kirghiz: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ky.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ky.300.vec.gz) | Korean: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ko.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ko.300.vec.gz) | Kurdish (Kurmanji): [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ku.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ku.300.vec.gz) |
| Kurdish (Sorani): [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ckb.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ckb.300.vec.gz) | Latin: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.la.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.la.300.vec.gz) | Latvian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.lv.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.lv.300.vec.gz) |
| Limburgish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.li.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.li.300.vec.gz) | Lithuanian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.lt.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.lt.300.vec.gz) | Lombard: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.lmo.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.lmo.300.vec.gz) |
| Low Saxon: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nds.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nds.300.vec.gz) | Luxembourgish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.lb.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.lb.300.vec.gz) | Macedonian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mk.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mk.300.vec.gz) |
| Maithili: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mai.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mai.300.vec.gz) | Malagasy: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mg.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mg.300.vec.gz) | Malay: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ms.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ms.300.vec.gz) |
| Malayalam: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ml.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ml.300.vec.gz) | Maltese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mt.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mt.300.vec.gz) | Manx: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gv.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gv.300.vec.gz) |
| Marathi: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mr.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mr.300.vec.gz) | Mazandarani: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mzn.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mzn.300.vec.gz) | Meadow Mari: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mhr.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mhr.300.vec.gz) |
| Minangkabau: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.min.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.min.300.vec.gz) | Mingrelian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.xmf.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.xmf.300.vec.gz) | Mirandese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mwl.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mwl.300.vec.gz) |
| Mongolian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mn.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.mn.300.vec.gz) | Nahuatl: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nah.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nah.300.vec.gz) | Neapolitan: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nap.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nap.300.vec.gz) |
| Nepali: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ne.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ne.300.vec.gz) | Newar: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.new.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.new.300.vec.gz) | North Frisian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.frr.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.frr.300.vec.gz) |
| Northern Sotho: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nso.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nso.300.vec.gz) | Norwegian (Bokmål): [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.no.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.no.300.vec.gz) | Norwegian (Nynorsk): [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nn.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.nn.300.vec.gz) |
| Occitan: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.oc.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.oc.300.vec.gz) | Oriya: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.or.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.or.300.vec.gz) | Ossetian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.os.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.os.300.vec.gz) |
| Palatinate German: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pfl.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pfl.300.vec.gz) | Pashto: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ps.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ps.300.vec.gz) | Persian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fa.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fa.300.vec.gz) |
| Piedmontese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pms.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pms.300.vec.gz) | Polish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pl.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pl.300.vec.gz) | Portuguese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pt.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pt.300.vec.gz) |
| Quechua: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.qu.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.qu.300.vec.gz) | Romanian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ro.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ro.300.vec.gz) | Romansh: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.rm.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.rm.300.vec.gz) |
| Russian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ru.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ru.300.vec.gz) | Sakha: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sah.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sah.300.vec.gz) | Sanskrit: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sa.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sa.300.vec.gz) |
| Sardinian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sc.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sc.300.vec.gz) | Scots: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sco.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sco.300.vec.gz) | Scottish Gaelic: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gd.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.gd.300.vec.gz) |
| Serbian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sr.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sr.300.vec.gz) | Serbo-Croatian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sh.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sh.300.vec.gz) | Sicilian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.scn.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.scn.300.vec.gz) |
| Sindhi: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sd.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sd.300.vec.gz) | Sinhalese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.si.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.si.300.vec.gz) | Slovak: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sk.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sk.300.vec.gz) |
| Slovenian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sl.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sl.300.vec.gz) | Somali: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.so.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.so.300.vec.gz) | Southern Azerbaijani: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.azb.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.azb.300.vec.gz) |
| Spanish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.es.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.es.300.vec.gz) | Sundanese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.su.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.su.300.vec.gz) | Swahili: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sw.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sw.300.vec.gz) |
| Swedish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sv.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.sv.300.vec.gz) | Tagalog: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tl.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tl.300.vec.gz) | Tajik: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tg.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tg.300.vec.gz) |
| Tamil: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ta.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ta.300.vec.gz) | Tatar: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tt.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tt.300.vec.gz) | Telugu: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.te.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.te.300.vec.gz) | | 泰米尔语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ta.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ta.300.vec.gz) | 鞑靼语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tt.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tt.300.vec.gz) | 泰卢固语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.te.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.te.300.vec.gz) |
| Thai: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.th.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.th.300.vec.gz) | Tibetan: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bo.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bo.300.vec.gz) | Turkish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tr.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tr.300.vec.gz) | | 泰语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.th.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.th.300.vec.gz) | 藏语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bo.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.bo.300.vec.gz) | 土耳其语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tr.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tr.300.vec.gz) |
| Turkmen: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tk.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tk.300.vec.gz) | Ukrainian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.uk.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.uk.300.vec.gz) | Upper Sorbian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hsb.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hsb.300.vec.gz) | | 土库曼语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tk.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.tk.300.vec.gz) | 乌克兰语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.uk.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.uk.300.vec.gz) | 上索布族语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hsb.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.hsb.300.vec.gz) |
| Urdu: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ur.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ur.300.vec.gz) | Uyghur: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ug.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ug.300.vec.gz) | Uzbek: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.uz.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.uz.300.vec.gz) | | 乌尔都语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ur.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ur.300.vec.gz) | 维吾尔语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ug.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.ug.300.vec.gz) | 乌兹别克语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.uz.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.uz.300.vec.gz) |
| Venetian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vec.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vec.300.vec.gz) | Vietnamese: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vi.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vi.300.vec.gz) | Volapük: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vo.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vo.300.vec.gz) | | 威尼斯语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vec.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vec.300.vec.gz) | 越南语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vi.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vi.300.vec.gz) | 沃拉普克语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vo.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vo.300.vec.gz) |
| Walloon: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.wa.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.wa.300.vec.gz) | Waray: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.war.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.war.300.vec.gz) | Welsh: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.cy.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.cy.300.vec.gz) | | 华隆语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.wa.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.wa.300.vec.gz) | 瓦莱语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.war.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.war.300.vec.gz) | 威尔士语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.cy.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.cy.300.vec.gz) |
| West Flemish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vls.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vls.300.vec.gz) | West Frisian: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fy.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fy.300.vec.gz) | Western Punjabi: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pnb.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pnb.300.vec.gz) | | 西佛兰芒语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vls.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.vls.300.vec.gz) | West 弗里斯兰语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fy.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.fy.300.vec.gz) | 西旁遮普语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pnb.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.pnb.300.vec.gz) |
| Yiddish: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.yi.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.yi.300.vec.gz) | Yoruba: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.yo.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.yo.300.vec.gz) | Zazaki: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.diq.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.diq.300.vec.gz) | | 意第绪语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.yi.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.yi.300.vec.gz) | 约鲁巴语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.yo.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.yo.300.vec.gz) | 扎扎其语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.diq.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.diq.300.vec.gz) |
| Zeelandic: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.zea.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.zea.300.vec.gz) | | 泽兰蒂克语: [bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.zea.300.bin.gz), [text](https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.zea.300.vec.gz) |
---
id: dataset
title: Datasets
---
[Download YFCC100M Dataset](https://fb-public.box.com/s/htfdbrvycvroebv9ecaezaztocbcnsdn)
---
id: english-vectors
title: English word vectors
---
This page gathers several pre-trained word vectors trained using fastText.
### Download pre-trained word vectors
Pre-trained word vectors learned on different sources can be downloaded below:
1. [wiki-news-300d-1M.vec.zip](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki-news-300d-1M.vec.zip): 1 million word vectors trained on Wikipedia 2017, UMBC webbase corpus and statmt.org news dataset (16B tokens).
2. [wiki-news-300d-1M-subword.vec.zip](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki-news-300d-1M-subword.vec.zip): 1 million word vectors trained with subword information on Wikipedia 2017, UMBC webbase corpus and statmt.org news dataset (16B tokens).
3. [crawl-300d-2M.vec.zip](https://s3-us-west-1.amazonaws.com/fasttext-vectors/crawl-300d-2M.vec.zip): 2 million word vectors trained on Common Crawl (600B tokens).
### Format
The first line of the file contains the number of words in the vocabulary and the size of the vectors.
Each line contains a word followed by its vector, like in the default fastText text format.
Each value is space separated. Words are ordered by descending frequency.
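As a quick sanity check, the header and a sample row can be inspected from the shell. A minimal sketch, assuming the first archive above has been downloaded (the exact numbers printed depend on the file):

```bash
$ unzip wiki-news-300d-1M.vec.zip
$ head -n 1 wiki-news-300d-1M.vec                        # vocabulary size and vector dimension
$ sed -n '2p' wiki-news-300d-1M.vec | cut -d ' ' -f 1-4  # a word followed by the first three values of its vector
```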
### License
These word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/).
### References
If you use these word vectors, please cite the following paper:
T. Mikolov, E. Grave, P. Bojanowski, C. Puhrsch, A. Joulin. [*Advances in Pre-Training Distributed Word Representations*](https://arxiv.org/abs/1712.09405)
......
---
id: faqs
title: FAQ
---
## What is fastText? Are there tutorials?
FastText is a library for text classification and representation. It transforms text into continuous vectors that can later be used on any language related task. A few tutorials are available.
## Why are my fastText models that big?
fastText uses a hashtable for either word or character ngrams. The size of the hashtable directly impacts the size of a model. To reduce the size of the model, it is possible to reduce the size of this table with the option '-hash'. For example a good value is 20000. Another option that greatly impacts the size of a model is the size of the vectors (-dim). This dimension can be reduced to save space, but this can significantly impact performance. If that still produces a model that is too big, one can further reduce the size of a trained model with the quantization option.
```bash
./fasttext quantize -output model
```
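As a hedged sketch of the options discussed above (file names are illustrative; the quantization flags are the ones suggested elsewhere in these docs):

```bash
# Train with smaller vectors, then quantize the resulting model to shrink it further
$ ./fasttext supervised -input data.train -output model -dim 50
$ ./fasttext quantize -output model -input data.train -qnorm -retrain -cutoff 100000
```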
## What would be the best way to represent word phrases rather than words?
Currently the best approach to represent word phrases or sentences is to take a bag of words of word vectors. Additionally, for phrases like "New York", preprocessing the data so that it becomes a single token "New_York" can greatly help.
## Why does fastText produce vectors even for unknown words?
One of the key features of fastText word representation is its ability to produce vectors for any word, even made-up ones.
Indeed, fastText word vectors are built from vectors of the character substrings contained in them.
This makes it possible to build vectors even for misspelled words or concatenations of words.
## Why is the hierarchical softmax slightly worse in performance than the full softmax?
The hierarchical softmax is an approximation of the full softmax loss that allows training on a large number of classes efficiently. This often comes at the cost of a few percent of accuracy.
Note also that this loss is designed for classes that are unbalanced, that is, some classes are more frequent than others. If your dataset has a balanced number of examples per class, it is worth trying the negative sampling loss (-loss ns -neg 100).
However, negative sampling will still be very slow at test time, since the full softmax will be computed.
## Can we run the fastText program on a GPU?
fastText only works on CPU for accessibility reasons. That being said, fastText has been implemented in the Caffe2 library, which can be run on GPU.
## Can I use fastText with python? Or other languages?
There are a few unofficial wrappers for python or lua available on GitHub.
## Can I use fastText with continuous data?
fastText works on discrete tokens and thus cannot be directly used on continuous tokens. However, one can discretize continuous tokens to use fastText on them, for example by rounding values to a specific digit ("12.3" becomes "12").
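For instance, a rounding pass over numeric input could look like the following sketch (the `awk` one-liner is only an illustration and is not part of fastText):

```bash
# Discretize a continuous value by truncating it to an integer token
$ echo "12.3" | awk '{print int($1)}'
12
```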
## There are misspellings in the dictionary. Should we improve text normalization?
If the words are infrequent, there is no need to worry.
## I'm encountering a NaN, why could this be?
You'll likely see this behavior because your learning rate is too high. Try reducing it until you don't see this error anymore.
## My compiler / architecture can't build fastText. What should I do?
Try a newer version of your compiler. We try to maintain compatibility with older versions of gcc and many platforms, however sometimes maintaining backwards compatibility becomes very hard. In general, compilers and tool chains that ship with LTS versions of major linux distributions should be fair game. In any case, create an issue with your compiler version and architecture and we'll try to implement compatibility.
......
---
id: language-identification
title: Language identification
---
### Description
We distribute two models for language identification, which can recognize 176 languages (see the list of ISO codes below). These models were trained on data from [Wikipedia](https://www.wikipedia.org/), [Tatoeba](https://tatoeba.org/eng/) and [SETimes](http://nlp.ffzg.hr/resources/corpora/setimes/), used under [CC-BY-SA](http://creativecommons.org/licenses/by-sa/3.0/).
We distribute two versions of the models:
* [lid.176.bin](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/lid.176.bin), which is faster and slightly more accurate, but has a file size of 126MB;
* [lid.176.ftz](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/lid.176.ftz), which is the compressed version of the model, with a file size of 917kB.
These models were trained on UTF-8 data, and therefore expect UTF-8 as input.
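A minimal usage sketch, assuming one of the models above has been downloaded next to the `fasttext` binary (the label shown is the ISO code we would expect for a French sentence):

```bash
# Predict the language of a UTF-8 sentence read from stdin
$ echo "Je mange une pomme" | ./fasttext predict lid.176.ftz -
__label__fr
```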
### License
The models are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/).
### List of supported languages
```
af als am an ar arz as ast av az azb ba bar bcl be bg bh bn bo bpy br bs bxr ca cbk ce ceb ckb co cs cv cy da de diq dsb dty dv el eml en eo es et eu fa fi fr frr fy ga gd gl gn gom gu gv he hi hif hr hsb ht hu hy ia id ie ilo io is it ja jbo jv ka kk km kn ko krc ku kv kw ky la lb lez li lmo lo lrc lt lv mai mg mhr min mk ml mn mr mrj ms mt mwl my myv mzn nah nap nds ne new nl nn no oc or os pa pam pfl pl pms pnb ps pt qu rm ro ru rue sa sah sc scn sco sd sh si sk sl so sq sr su sv sw ta te tg th tk tl tr tt tyv ug uk ur uz vec vep vi vls vo wa war wuu xal xmf yi yo yue zh
```
### References
If you use these models, please cite the following papers:
[1] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759)
```
......
---
id: options
title: List of options
---
Invoke a command without arguments to list available arguments and their default values:
```bash
$ ./fasttext supervised
Empty input or output path.
The following arguments are mandatory:
  -input              training file path
  -output             output file path

The following arguments are optional:
  -verbose            verbosity level [2]

The following arguments for the dictionary are optional:
  -minCount           minimal number of word occurences [5]
  -minCountLabel      minimal number of label occurences [0]
  -wordNgrams         max length of word ngram [1]
  -bucket             number of buckets [2000000]
  -minn               min length of char ngram [3]
  -maxn               max length of char ngram [6]
  -t                  sampling threshold [0.0001]
  -label              labels prefix [__label__]

The following arguments for training are optional:
  -lr                 learning rate [0.05]
  -lrUpdateRate       change the rate of updates for the learning rate [100]
  -dim                size of word vectors [100]
  -ws                 size of the context window [5]
  -epoch              number of epochs [5]
  -neg                number of negatives sampled [5]
  -loss               loss function {ns, hs, softmax} [ns]
  -thread             number of threads [12]
  -pretrainedVectors  pretrained word vectors for supervised learning []
  -saveOutput         whether output params should be saved [0]

The following arguments for quantization are optional:
  -cutoff             number of words and ngrams to retain [0]
  -retrain            finetune embeddings if a cutoff is applied [0]
  -qnorm              quantizing the norm separately [0]
  -qout               quantizing the classifier [0]
  -dsub               size of each sub-vector [2]
```
Defaults may vary by mode. (Word-representation modes `skipgram` and `cbow` use a default `-minCount` of 5.)
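A hedged example of overriding several of these defaults in a single supervised run (file names are illustrative):

```bash
# Raise the learning rate and the number of epochs, and turn on bigrams
$ ./fasttext supervised -input data.train -output model -lr 0.5 -epoch 10 -wordNgrams 2
```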
---
id: pretrained-vectors
title: Wiki word vectors
---
We are publishing pre-trained word vectors for 294 languages, trained on [*Wikipedia*](https://www.wikipedia.org) using fastText.
These vectors in dimension 300 were obtained using the skip-gram model described in [*Bojanowski et al. (2016)*](https://arxiv.org/abs/1607.04606) with default parameters.
Please note that a newer version of multi-lingual word vectors is available at: [https://fasttext.cc/docs/en/crawl-vectors.html](https://fasttext.cc/docs/en/crawl-vectors.html).
### Models
The models can be downloaded from:
||||
|-|-|-|
| Yiddish: [*bin+text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.yi.zip), [*text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.yi.vec) | Yoruba: [*bin+text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.yo.zip), [*text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.yo.vec) | Zazaki: [*bin+text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.diq.zip), [*text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.diq.vec) |
| Zeelandic: [*bin+text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.zea.zip), [*text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.zea.vec) | Zhuang: [*bin+text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.za.zip), [*text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.za.vec) | Zulu: [*bin+text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.zu.zip), [*text*](https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.zu.vec) |
### Format
The word vectors come in both the binary and text default formats of fastText.
In the text format, each line contains a word followed by its vector. Each value is space separated.
Words are ordered by their frequency in a descending order.
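A minimal sketch of querying the binary format, assuming the English model `wiki.en.bin` from the full table has been downloaded and unzipped:

```bash
# Print the vector of a word read from stdin
$ echo "asparagus" | ./fasttext print-word-vectors wiki.en.bin
```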
### License
The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/).
### References
If you use these word vectors, please cite the following paper:
P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606)
......
---
id: references
title: References
---
Please cite [1](#enriching-word-vectors-with-subword-information) if using this code for learning word representations or [2](#bag-of-tricks-for-efficient-text-classification) if using for text classification.
[1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606)
```markup
@article{bojanowski2016enriching,
  title={Enriching Word Vectors with Subword Information},
  author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
  journal={arXiv preprint arXiv:1607.04606},
  year={2016}
}
```
[2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759)
```markup
@article{joulin2016bag,
  title={Bag of Tricks for Efficient Text Classification},
  author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
  journal={arXiv preprint arXiv:1607.01759},
  year={2016}
}
```
[3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651)
```markup
@article{joulin2016fasttext,
  title={FastText.zip: Compressing text classification models},
  author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{\'e}gou, H{\'e}rve and Mikolov, Tomas},
  journal={arXiv preprint arXiv:1612.03651},
  year={2016}
}
```
(\* These authors contributed equally.)
---
id: supervised-models
title: Supervised models
---
This page gathers several pre-trained supervised models on several datasets.
### Description
The regular models are trained using the procedure described in [1]. They can be reproduced using the classification-results.sh script within our github repository. The quantized models are built by using the respective supervised settings and adding the following flags to the quantize subcommand.
```bash
-qnorm -retrain -cutoff 100000
```
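Put together, a full quantization call would look like the following sketch (the dataset and model names are illustrative; the flags are the ones listed above):

```bash
# Rebuild a compressed .ftz model from a trained one, finetuning after the cutoff
$ ./fasttext quantize -output ag_news -input ag_news.train -qnorm -retrain -cutoff 100000
```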
### Table of models
Each entry describes the test accuracy and size of the model. You can click on a table cell to download the corresponding model.
| dataset | ag news | amazon review full | amazon review polarity | dbpedia |
|-----------|-----------------------|-----------------------|------------------------|------------------------|
| regular | [0.924 / 387MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/ag_news.bin) | [0.603 / 462MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/amazon_review_full.bin) | [0.946 / 471MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/amazon_review_polarity.bin) | [0.986 / 427MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/dbpedia.bin) |
| compressed | [0.92 / 1.6MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/ag_news.ftz) | [0.599 / 1.6MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/amazon_review_full.ftz) | [0.93 / 1.6MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/amazon_review_polarity.ftz) | [0.984 / 1.7MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/dbpedia.ftz) |
| dataset | sogou news | yahoo answers | yelp review polarity | yelp review full |
|-----------|----------------------|------------------------|----------------------|------------------------|
| regular | [0.969 / 402MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/sogou_news.bin) | [0.724 / 494MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/yahoo_answers.bin) | [0.957 / 409MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/yelp_review_polarity.bin) | [0.639 / 412MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/yelp_review_full.bin) |
| compressed | [0.968 / 1.4MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/sogou_news.ftz) | [0.717 / 1.6MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/yahoo_answers.ftz) | [0.957 / 1.5MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/yelp_review_polarity.ftz) | [0.636 / 1.5MB](https://s3-us-west-1.amazonaws.com/fasttext-vectors/supervised_models/yelp_review_full.ftz) |
### References
If you use these models, please cite the following paper:
[1] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759)
```markup
@article{joulin2016bag,
  title={Bag of Tricks for Efficient Text Classification},
  author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas},
  journal={arXiv preprint arXiv:1607.01759},
  year={2016}
}
```
[2] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651)
```markup
@article{joulin2016fasttext,
......
---
id: supervised-tutorial
title: Text classification
---
Text classification is a core problem to many applications, like spam detection, sentiment analysis or smart replies. In this tutorial, we describe how to build a text classifier with the fastText tool.
## What is text classification?
The goal of text classification is to assign documents (such as emails, posts, text messages, product reviews, etc...) to one or multiple categories. Such categories can be review scores, spam v.s. non-spam, or the language in which the document was typed. Nowadays, the dominant approach to build such classifiers is machine learning, that is learning classification rules from examples. In order to build such classifiers, we need labeled data, which consists of documents and their corresponding categories (or tags, or labels).
As an example, we build a classifier which automatically classifies [stackexchange](https://stackexchange.com/) questions about cooking into one of several possible tags, such as `pot`, `bowl` or `baking`.
## Installing fastText
The first step of this tutorial is to install and build fastText. It only requires a c++ compiler with good support of c++11.
Let us start by downloading the [most recent release](https://github.com/facebookresearch/fastText/releases):
```bash
$ wget https://github.com/facebookresearch/fastText/archive/v0.1.0.zip
$ unzip v0.1.0.zip
```
Move to the fastText directory and build it:
```bash
$ cd fastText-0.1.0
$ make
```
Running the binary without any argument will print the high level documentation, showing the different usecases supported by fastText:
```bash
>> ./fasttext
usage: fasttext <command> <args>

The commands supported by fasttext are:

  supervised              train a supervised classifier
  quantize                quantize a model to reduce the memory usage
  test                    evaluate a supervised classifier
  predict                 predict most likely labels
  predict-prob            predict most likely labels with probabilities
  skipgram                train a skipgram model
  cbow                    train a cbow model
  print-word-vectors      print word vectors given a trained model
  print-sentence-vectors  print sentence vectors given a trained model
  nn                      query for nearest neighbors
  analogies               query for analogies
```
In this tutorial, we mainly use the `supervised`, `test` and `predict` subcommands, which correspond to learning (and using) a text classifier. For an introduction to the other functionalities of fastText, please see the [tutorial about learning word vectors](https://fasttext.cc/docs/en/unsupervised-tutorial.html).
## Getting and preparing the data
As mentioned in the introduction, we need labeled data to train our supervised classifier. In this tutorial, we are interested in building a classifier to automatically recognize the topic of a stackexchange question about cooking. Let's download examples of questions from [the cooking section of Stackexchange](http://cooking.stackexchange.com/), and their associated tags:
```bash
>> wget https://s3-us-west-1.amazonaws.com/fasttext-vectors/cooking.stackexchange.tar.gz && tar xvzf cooking.stackexchange.tar.gz
>> head cooking.stackexchange.txt
```
Each line of the text file contains a list of labels, followed by the corresponding document. All the labels start with the `__label__` prefix, which is how fastText recognizes what is a label and what is a word. The model is then trained to predict the labels given the words in the document.
Before training our first classifier, we need to split the data into train and validation sets. We will use the validation set to evaluate how good the learned classifier is on new data.
```bash
>> wc cooking.stackexchange.txt
15404 169582 1401900 cooking.stackexchange.txt
```
Our full dataset contains 15404 examples. Let's split it into a training set of 12404 examples and a validation set of 3000 examples:
```bash
>> head -n 12404 cooking.stackexchange.txt > cooking.train
>> tail -n 3000 cooking.stackexchange.txt > cooking.valid
```
## Our first classifier
We are now ready to train our first classifier:
```bash
>> ./fasttext supervised -input cooking.train -output model_cooking
Number of labels: 734
Progress: 100.0% words/sec/thread: 75109 lr: 0.000000 loss: 5.708354 eta: 0h0m
```
The `-input` command line option indicates the file containing the training examples, while the `-output` option indicates where to save the model. At the end of training, a file `model_cooking.bin`, containing the trained classifier, is created in the current directory.
It is possible to directly test our classifier interactively, by running the command:
```bash
>> ./fasttext predict model_cooking.bin -
```
and then typing a sentence. Let's first try the sentence:
*Which baking dish is best to bake a banana bread ?*
The predicted tag is `baking`, which fits this question well. Let us now try a second example:
*Why not put knives in the dishwasher?*
The label predicted by the model is `food-safety`, which is not relevant. Somehow, the model seems to fail on simple examples. To get a better sense of its quality, let's test it on the validation data by running:
```bash
>> ./fasttext test model_cooking.bin cooking.valid
P@1 0.124
R@1 0.0541
Number of examples: 3000
```
The output of fastText shows the precision at one (`P@1`) and the recall at one (`R@1`). We can also compute the precision at five and recall at five with:
```bash
>> ./fasttext test model_cooking.bin cooking.valid 5
R@5 0.146
Number of examples: 3000
```
## Advanced readers: precision and recall
The precision is the number of correct labels among the labels predicted by fastText. The recall is the number of labels that were successfully predicted, among all the real labels. Let's take an example to make this more clear:
*Why not put knives in the dishwasher?*
On Stack Exchange, this sentence is labeled with three tags: `equipment`, `cleaning` and `knives`. The top five labels predicted by the model can be obtained with:
```bash
>> ./fasttext predict model_cooking.bin - 5
```
are `food-safety`, `baking`, `equipment`, `substitutions` and `bread`.
Thus, one out of five labels predicted by the model is correct, giving a precision of 0.20. Out of the three real labels, only one is predicted by the model, giving a recall of 0.33.
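To see how confident the model is in each of these guesses, the `predict-prob` subcommand listed earlier can be used in the same way. A hedged sketch:

```bash
# Print the top five labels together with their probabilities
>> echo "Why not put knives in the dishwasher?" | ./fasttext predict-prob model_cooking.bin - 5
```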
For more details, see [the related Wikipedia page](https://en.wikipedia.org/wiki/Precision_and_recall).
## Making the model better
The model obtained by running fastText with the default arguments is pretty bad at classifying new questions. Let's try to improve the performance, by changing the default parameters.
### preprocessing the data
Looking at the data, we observe that some words contain uppercase letters or punctuation. One of the first steps to improve the performance of our model is to apply some simple pre-processing. A crude normalization can be obtained using command line tools such as `sed` and `tr`:
```bash
>> cat cooking.stackexchange.txt | sed -e "s/\([.\!?,'/()]\)/ \1 /g" | tr "[:upper:]" "[:lower:]" > cooking.preprocessed.txt
>> head -n 12404 cooking.preprocessed.txt > cooking.train
>> tail -n 3000 cooking.preprocessed.txt > cooking.valid
```
Let's train a new model on the pre-processed data:
```bash
>> ./fasttext supervised -input cooking.train -output model_cooking
>> ./fasttext test model_cooking.bin cooking.valid
P@1 0.164
R@1 0.0717
Number of examples: 3000
```
We observe that thanks to the pre-processing, the vocabulary is smaller (from 14k words to 9k). The precision is also starting to go up by 4%!
### more epochs and larger learning rate
By default, fastText sees each training example only five times during training, which is pretty small, given that our training set only has 12k training examples. The number of times each example is seen (also known as the number of epochs) can be increased using the `-epoch` option:
```bash
>> ./fasttext supervised -input cooking.train -output model_cooking -epoch 25
Number of labels: 734
Progress: 100.0% words/sec/thread: 77633 lr: 0.000000 loss: 7.147976 eta: 0h0m
```
Let's test the new model:
```bash
>> ./fasttext test model_cooking.bin cooking.valid
R@1 0.218
Number of examples: 3000
```
This is much better! Another way to change the learning speed of our model is to increase (or decrease) the learning rate of the algorithm. This corresponds to how much the model changes after processing each example. A learning rate of 0 would mean that the model does not change at all, and thus does not learn anything. Good values of the learning rate are in the range `0.1 - 1.0`.
```bash
>> ./fasttext supervised -input cooking.train -output model_cooking -lr 1.0
>> ./fasttext test model_cooking.bin cooking.valid
R@1 0.245
Number of examples: 3000
```
Even better! Let's try both together:
```bash
>> ./fasttext supervised -input cooking.train -output model_cooking -lr 1.0 -epoch 25
>> ./fasttext test model_cooking.bin cooking.valid
R@1 0.255
Number of examples: 3000
```
Let us now add a few more features to improve our performance even further!
### word n-grams
Finally, we can improve the performance of a model by using word bigrams, instead of just unigrams. This is especially important for classification problems where word order is important, such as sentiment analysis.
```bash
>> ./fasttext supervised -input cooking.train -output model_cooking -lr 1.0 -epoch 25 -wordNgrams 2
>> ./fasttext test model_cooking.bin cooking.valid
P@1 0.599
R@1 0.261
Number of examples: 3000
```
With a few steps, we were able to go from a precision at one of 12.4% to 59.9%. Important steps included:
* preprocessing the data ;
* changing the number of epochs (using the option `-epoch`, standard range `[5 - 50]`) ;
* changing the learning rate (using the option `-lr`, standard range `[0.1 - 1.0]`) ;
* using word n-grams (using the option `-wordNgrams`, standard range `[1 - 5]`).
## Advanced readers: What is a Bigram? ## 高级读者: 什么是 Bigram?
A 'unigram' refers to a single undividing unit, or token, usually used as an input to a model. For example a unigram can a word or a letter depending on the model. In fastText, we work at the word level and thus unigrams are words. 'unigram' 指的是单个不可分割的单位或标记,通常用作模型的输入。 例如,根据模型的不同,'unigram' 可以是单词或字母。 在 fastText 中,我们在单词级别工作,因此 unigrams 是单词。
Similarly we denote by 'bigram' the concatenation of 2 consecutive tokens or words. Similarly we often talk about n-gram to refer to the concatenation any n consecutive tokens. 类似地,我们用 'bigram' 表示2个连续标记或单词的连接。 类似地,我们经常谈论 n-gram 来引用任意 n 个连续标记或单词的级联。
For example, in the sentence, 'Last donut of the night', the unigrams are 'last', 'donut', 'of', 'the' and 'night'. The bigrams are: 'Last donut', 'donut of', 'of the' and 'the night'. 例如,在 'Last donut of the night' 这个句子中,unigrams是 'last','donut','of','the' 和 'night'。 bigrams 是 'Last donut', 'donut of', 'of the' 和 'the night'。
Bigrams are particularly interesting because, for most sentences, you can reconstruct the order of the words just by looking at a bag of n-grams. Bigrams 特别有趣,因为对于大多数句子,只需查看 n-gram 的集合即可重建句子中单词的顺序。
Let us illustrate this by a simple exercise: given the following bigrams, try to reconstruct the original sentence: 'all out', 'I am', 'of bubblegum', 'out of' and 'am all'.

It is common to refer to a word as a unigram.
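To check your answer to this exercise (or to see where the n-grams in the examples above come from), here is a minimal sketch in plain Python; it is purely illustrative and not part of the fastText toolkit:

```python
# Minimal sketch: extract word-level n-grams from a sentence.
# Plain Python for illustration; not part of fastText itself.

def ngrams(tokens, n):
    """Return the list of n-grams (joined as strings) of a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

sentence = "Last donut of the night".split()
print(ngrams(sentence, 1))  # ['Last', 'donut', 'of', 'the', 'night']
print(ngrams(sentence, 2))  # ['Last donut', 'donut of', 'of the', 'the night']
```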
## Scaling things up
Since we are training our model on a few thousands of examples, the training only takes a few seconds. But training models on larger datasets, with more labels, can start to be too slow. A potential solution to make the training faster is to use the hierarchical softmax instead of the regular softmax. The hierarchical softmax organizes the labels in a binary tree, so that computing the loss of one example scales with the logarithm of the number of labels instead of linearly. This can be done with the option `-loss hs`:
```bash
>> ./fasttext supervised -input cooking.train -output model_cooking -lr 1.0 -epoch 25 -wordNgrams 2 -bucket 200000 -dim 50 -loss hs
...
Number of labels: 734
Progress: 100.0% words/sec/thread: 2199406 lr: 0.000000 loss: 1.718807 eta: 0h0m
```
Training should now take less than a second.
## Conclusion

In this tutorial, we gave a brief overview of how to use fastText to train powerful text classifiers. We had a light overview of some of the most important options to tune.
---
id: support
title: Get started
---
## What is fastText?

fastText is a library for efficient learning of word representations and sentence classification.

## Requirements

fastText builds on modern Mac OS and Linux distributions.
Since it uses C++11 features, it requires a compiler with good C++11 support.
These include:

* (gcc-4.6.3 or newer) or (clang-3.3 or newer)

Compilation is carried out using a Makefile, so you will need to have a working **make**.
For the word-similarity evaluation script you will need:

* python 2.6 or newer
* numpy & scipy
## Building fastText

In order to build `fastText`, use the following:
```bash
$ git clone https://github.com/facebookresearch/fastText.git
$ cd fastText
$ make
```
This will produce object files for all the classes as well as the main binary `fasttext`.
If you do not plan on using the default system-wide compiler, update the two macros defined at the beginning of the Makefile (CC and INCLUDES).
---
id: unsupervised-tutorial
title: Word representations
---
A popular idea in modern machine learning is to represent words by vectors. These vectors capture hidden information about a language, like word analogies or semantics. They are also used to improve the performance of text classifiers.

In this tutorial, we show how to build these word vectors with the fastText tool. To download and install fastText, follow the first steps of [the tutorial on text classification](https://fasttext.cc/docs/en/supervised-tutorial.html).

## Getting the data

In order to compute word vectors, you need a large text corpus. Depending on the corpus, the word vectors will capture different information. In this tutorial, we focus on Wikipedia's articles but other sources could be considered, like news or Webcrawl (more examples [here](http://statmt.org/)). To download a raw dump of Wikipedia, run the following command:
```bash
wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
```
Downloading the Wikipedia corpus takes some time. Instead, let's restrict our study to the first 1 billion bytes of English Wikipedia. They can be found on Matt Mahoney's [website](http://mattmahoney.net/):
```bash
$ mkdir data
$ wget -c http://mattmahoney.net/dc/enwik9.zip -P data
$ unzip data/enwik9.zip -d data
```
A raw Wikipedia dump contains a lot of HTML / XML data. We pre-process it with the wikifil.pl script bundled with fastText (this script was originally developed by Matt Mahoney, and can be found on his [website](http://mattmahoney.net/)).
```bash
$ perl wikifil.pl data/enwik9 > data/fil9
```
We can check the file by running the following command:
```bash
$ head -c 80 data/fil9
anarchism originated as a term of abuse first used against early working class
```
The text is nicely pre-processed and can be used to learn our word vectors.

## Training word vectors

Learning word vectors on this data can now be achieved with a single command:
```bash
$ mkdir result
$ ./fasttext skipgram -input data/fil9 -output result/fil9
```
To decompose this command line: `./fasttext` calls the binary fastText executable (see how to install fastText here) with the 'skipgram' model (it can also be 'cbow'). We then specify the required options: '-input' for the location of the data and '-output' for the location where the word representations will be saved.
While fastText is running, the progress and estimated time to completion is shown on your screen. Once the program finishes, there should be two files in the result directory:
```bash
$ ls -l result
...
-rw-r-r-- 1 bojanowski 1876110778 190004182 Dec 20 11:01 fil9.vec
```
The `fil9.bin` file is a binary file that stores the whole fastText model and can be subsequently loaded. The `fil9.vec` file is a text file that contains the word vectors, one per line for each word in the vocabulary:
```bash
$ head -n 4 result/fil9.vec
...
of -0.0083724 0.0059414 -0.046618 -0.072735 0.83007 0.038895 -0.13634 0.60063 ...
one 0.32731 0.044409 -0.46484 0.14716 0.7431 0.24684 -0.11301 0.51721 0.73262 ...
```
The first line is a header containing the number of words and the dimensionality of the vectors. The subsequent lines are the word vectors for all words in the vocabulary, sorted by decreasing frequency.
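Because `fil9.vec` is plain text, it is easy to inspect programmatically. Below is a minimal Python sketch (it assumes the training above produced `result/fil9.vec`, and uses only the standard library) that parses the header and loads the vectors into a dictionary:

```python
# Minimal sketch: parse the textual .vec format produced by fastText.
# Assumes result/fil9.vec exists.

def load_vectors(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        n_words, dim = map(int, f.readline().split())  # header: count and dimension
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = [float(x) for x in parts[1:]]
    return n_words, dim, vectors

n_words, dim, vectors = load_vectors("result/fil9.vec")
print(n_words, dim)          # vocabulary size and vector dimensionality
print(len(vectors["the"]))   # should equal dim
```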
## Advanced readers: skipgram versus cbow

fastText provides two models for computing word representations: skipgram and cbow ('**c**ontinuous-**b**ag-**o**f-**w**ords').

The skipgram model learns to predict a target word thanks to a nearby word. On the other hand, the cbow model predicts the target word according to its context. The context is represented as a bag of the words contained in a fixed size window around the target word.
Let us illustrate this difference with an example: given the sentence *'Poets have been mysteriously silent on the subject of cheese'* and the target word '*silent*', a skipgram model tries to predict the target using a random close-by word, like '*subject*' or '*mysteriously*'. The cbow model takes all the words in a surrounding window, like {*been*, *mysteriously*, *on*, *the*}, and uses the sum of their vectors to predict the target. The figure below summarizes this difference with another example.
![cbow vs skipgram](https://fasttext.cc/img/cbo_vs_skipgram.png)
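To make the difference concrete, here is a small illustrative Python sketch (window size and sentence chosen arbitrarily; real training samples the window size and runs over the whole corpus) that prints the training examples each model would derive around one target word:

```python
# Minimal sketch: examples seen by skipgram vs cbow around one target word.
# The window size is fixed here; fastText actually samples it per word.

tokens = "poets have been mysteriously silent on the subject of cheese".split()
window = 2

for i, target in enumerate(tokens):
    if target != "silent":
        continue
    context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
    # skipgram: each (nearby word, target) pair is a separate example
    print("skipgram:", [(c, target) for c in context])
    # cbow: the whole bag of context words predicts the target at once
    print("cbow:    ", (context, target))
```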
To train a cbow model with fastText, you run the following command:
```bash
./fasttext cbow -input data/fil9 -output result/fil9
```
In practice, we observe that skipgram models work better with subword information than cbow.

## Advanced readers: playing with the parameters
So far, we ran fastText with the default parameters, but depending on the data, these parameters may not be optimal. Let us give an introduction to some of the key parameters for word vectors.

The most important parameters of the model are its dimension and the range of size for the subwords. The dimension (*dim*) controls the size of the vectors: the larger they are, the more information they can capture, but the more data they require to be learned. And if they are too large, they are harder and slower to train. By default, we use 100 dimensions, but any value in the 100-300 range is popular as well. The subwords are all the substrings contained in a word between the minimum size (*minn*) and the maximal size (*maxn*). By default, we take all the subwords between 3 and 6 characters, but other ranges could be more appropriate for different languages:
```bash
$ ./fasttext skipgram -input data/fil9 -output result/fil9 -minn 2 -maxn 5 -dim 300
```
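To see exactly which subwords a given setting produces, here is a small illustrative Python sketch enumerating the character n-grams for one word; following the fastText paper's convention, the word is wrapped in '<' and '>' so that prefixes and suffixes are distinguishable from inner substrings:

```python
# Minimal sketch: character n-grams between minn and maxn for one word.
# fastText wraps each word in '<' and '>' boundary markers.

def subwords(word, minn=3, maxn=6):
    w = "<" + word + ">"
    return [w[i:i + n]
            for n in range(minn, maxn + 1)
            for i in range(len(w) - n + 1)]

print(subwords("where", minn=3, maxn=3))
# ['<wh', 'whe', 'her', 'ere', 're>']
```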
Depending on the quantity of data you have, you may want to change the training parameters. The *epoch* parameter controls how many times the model will loop over your data. By default, we loop over the dataset 5 times. If your dataset is extremely massive, you may want to loop over it less often. Another important parameter is the learning rate (*-lr*). The higher the learning rate is, the faster the model converges to a solution, but at the risk of overfitting the dataset. The default value is 0.05, which is a good compromise. If you want to play with it, we suggest staying in the range of [0.01, 1]:
```bash
$ ./fasttext skipgram -input data/fil9 -output result/fil9 -epoch 1 -lr 0.5
```
Finally, fastText is multi-threaded and uses 12 threads by default. If you have fewer CPU cores (say 4), you can easily set the number of threads using the *thread* flag:
```bash
$ ./fasttext skipgram -input data/fil9 -output result/fil9 -thread 4
```
## Printing word vectors

Searching and printing word vectors directly from the `fil9.vec` file is cumbersome. Fortunately, there is a `print-word-vectors` functionality in fastText.

For example, we can print the word vectors of the words *asparagus*, *pidgey* and *yellow* with the following command:
```bash
$ echo "asparagus pidgey yellow" | ./fasttext print-word-vectors result/fil9.bin
...
pidgey -0.16065 -0.45867 0.10565 0.036952 -0.11482 0.030053 0.12115 0.39725 ...
yellow -0.39965 -0.41068 0.067086 -0.034611 0.15246 -0.12208 -0.040719 -0.30155 ...
```
A nice feature is that you can also query for words that did not appear in your data! Indeed, words are represented by the sum of their substrings. As long as the unknown word is made of known substrings, there is a representation of it!

As an example let's try with a misspelled word:
```bash
$ echo "enviroment" | ./fasttext print-word-vectors result/fil9.bin
```
You still get a word vector for it! But how good is it? Let's find out in the next sections!
## Nearest neighbor queries

A simple way to check the quality of a word vector is to look at its nearest neighbors. This gives an intuition of the type of semantic information the vectors are able to capture.

This can be achieved with the *nn* functionality. For example, we can query the 10 nearest neighbors of a word by running the following command:
```bash
$ ./fasttext nn result/fil9.bin
Pre-computing word vectors... done.
```
Then we are prompted to type our query word, let us try *asparagus*:
```bash
Query word? asparagus
...
celery 0.774529
beets 0.773984
```
Nice! It seems that vegetable vectors are similar. Note that the nearest neighbor is the word *asparagus* itself, which means that this word appeared in the dataset. What about pokemons?
```bash
Query word? pidgey
...
beedrill 0.741579
charmeleon 0.733625
```
Different evolutions of the same Pokemon have close-by vectors! But what about our misspelled word, is its vector close to anything reasonable? Let's find out:
```bash
Query word? enviroment
...
acclimatation 0.697196
ecotourism 0.697081
```
Thanks to the information contained within the word, the vector of our misspelled word matches reasonable words! It is not perfect, but the main information has been captured.
## Advanced reader: measure of similarity
In order to find nearest neighbors, we need to compute a similarity score between words. Our words are represented by continuous word vectors and we can thus apply simple similarities to them. In particular, we use the cosine of the angle between two vectors. This similarity is computed for all words in the vocabulary, and the 10 most similar words are shown. Of course, if the word appears in the vocabulary, it will appear on top, with a similarity of 1.
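For reference, here is a minimal sketch of that computation in Python, using a `{word: vector}` dictionary such as the one loaded from `fil9.vec` earlier; a real implementation would normalize the vectors once and use matrix operations instead of this brute-force loop:

```python
# Minimal sketch: cosine similarity and a brute-force nearest-neighbor query
# over a {word: vector} dictionary (e.g. loaded from result/fil9.vec).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(query, vectors, k=10):
    q = vectors[query]
    scored = [(cosine(q, v), w) for w, v in vectors.items()]
    return sorted(scored, reverse=True)[:k]  # the query itself scores 1.0

# Hypothetical usage, assuming `vectors` was filled beforehand:
# for score, word in nearest("asparagus", vectors):
#     print(word, round(score, 6))
```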
## Word analogies

In a similar spirit, one can play around with word analogies. For example, we can see if our model can guess which word is to France what Berlin is to Germany.

This can be done with the *analogies* functionality. It takes a word triplet (like *Germany Berlin France*) and outputs the analogy:
```bash
$ ./fasttext analogies result/fil9.bin
...
bordeaux 0.740635
pigneaux 0.736122
```
The answer provided by our model is *Paris*, which is correct. Let's have a look at a less obvious example:
```bash
Query triplet (A - B + C)? psx sony nintendo
...
dreamcast 0.74907
famicom 0.745298
```
Our model considers that the *nintendo* analogy of a *psx* is the *gamecube*, which seems reasonable. Of course, the quality of the analogies depends on the dataset used to train the model, and one can only hope to cover the fields that are in the dataset.
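Under the hood this is again a nearest-neighbor search, but around the point A - B + C (hence the prompt above). Here is a minimal illustrative Python sketch of that arithmetic over a `{word: vector}` dictionary like the one used in the previous sketches:

```python
# Minimal sketch: a word-analogy query as vector arithmetic over a
# {word: vector} dictionary (e.g. loaded from result/fil9.vec).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def analogy(a, b, c, vectors, k=10):
    # Target point A - B + C, matching the "Query triplet (A - B + C)?" prompt.
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    scored = [(cosine(target, v), w)
              for w, v in vectors.items() if w not in (a, b, c)]
    return sorted(scored, reverse=True)[:k]

# Hypothetical usage: analogy("psx", "sony", "nintendo", vectors) should rank
# "gamecube" near the top, as in the session above.
```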
## Importance of character n-grams

Using subword-level information is particularly interesting to build vectors for unknown words. For example, the word *gearshift* does not exist on Wikipedia but we can still query its closest existing words:
```bash
Query word? gearshift
...
epicycles 0.744268
gearboxes 0.73986
```
Most of the retrieved words share substantial substrings, but a few are actually quite different, like *cogwheel*. You can try other words like *sunbathe* or *grandnieces*.
Now that we have seen the interest of subword information for unknown words, let's check how it compares to a model that does not use subword information. To train a model without subwords, just run the following command:
```bash
$ ./fasttext skipgram -input data/fil9 -output result/fil9-none -maxn 0
```
The results are saved in `result/fil9-none.vec` and `result/fil9-none.bin`.

To illustrate the difference, let us take an uncommon word in Wikipedia, like *accomodation*, which is a misspelling of *accommodation*. Here are the nearest neighbors obtained without subwords:
```bash
$ ./fasttext nn result/fil9-none.bin
Query word? accomodation
...
greenbelts 0.733975
asserbo 0.732465
```
The result does not make much sense: most of these words are unrelated. On the other hand, using subword information gives the following list of nearest neighbors:
```bash
Query word? accomodation
...
accomodate 0.703177
hospitality 0.701426
```
The nearest neighbors capture different variations around the word *accommodation*. We also get semantically related words such as *amenities* or *lodging*.
## Conclusion

In this tutorial, we show how to obtain word vectors from Wikipedia. This can be done for any language, and we provide [pre-trained models](https://fasttext.cc/docs/en/pretrained-vectors.html) with the default settings for 294 of them.