Commit e13c1f34 authored by Jacob Devlin

Fixing typo in function name and updating README

Parent 2f82f216
# BERT
+**\*\*\*\*\* New November 5th, 2018: Third-party PyTorch version of BERT
+available \*\*\*\*\***
+
+NLP researchers from HuggingFace made a
+[PyTorch version of BERT available](https://github.com/huggingface/pytorch-pretrained-BERT)
+which is compatible with our pre-trained checkpoints and is able to reproduce
+our results. (Thanks!) We were not involved in the creation or maintenance of
+the PyTorch implementation, so please direct any questions towards the authors
+of that repository.
+
**\*\*\*\*\* New November 3rd, 2018: Multilingual and Chinese models available
\*\*\*\*\***
@@ -63,8 +73,8 @@ minutes.
## What is BERT?
-BERT is method of pre-training language representations, meaning that we train a
-general-purpose "language understanding" model on a large text corpus (like
+BERT is a method of pre-training language representations, meaning that we train
+a general-purpose "language understanding" model on a large text corpus (like
Wikipedia), and then use that model for downstream NLP tasks that we care about
(like question answering). BERT outperforms previous methods because it is the
first *unsupervised*, *deeply bidirectional* system for pre-training NLP.
@@ -778,9 +788,13 @@ information.
#### Is there a PyTorch version available?
-There is no official PyTorch implementation. If someone creates a line-for-line
-PyTorch reimplementation so that our pre-trained checkpoints can be directly
-converted, we would be happy to link to that PyTorch version here.
+There is no official PyTorch implementation. However, NLP researchers from
+HuggingFace made a
+[PyTorch version of BERT available](https://github.com/huggingface/pytorch-pretrained-BERT)
+which is compatible with our pre-trained checkpoints and is able to reproduce
+our results. We were not involved in the creation or maintenance of the PyTorch
+implementation, so please direct any questions towards the authors of that
+repository.
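
To make the FAQ answer concrete, here is a minimal, hedged sketch of loading the third-party port. It is not part of this repository: it assumes `pip install pytorch-pretrained-bert` and uses the class names documented in that package's README at the time (`BertTokenizer`, `BertModel`), which may differ in later releases.

```python
# Sketch of loading the HuggingFace PyTorch port (not code from this repo).
# Assumes the pytorch-pretrained-bert package and its documented API.
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

# The package expects explicit [CLS]/[SEP] markers around the WordPiece tokens.
tokens = ["[CLS]"] + tokenizer.tokenize("BERT is deeply bidirectional.") + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    # Returns the hidden states of every encoder layer plus the pooled output.
    encoded_layers, pooled_output = model(input_ids)
print(len(encoded_layers), encoded_layers[-1].shape, pooled_output.shape)
```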
#### Will models in other languages be released?
......
@@ -170,7 +170,7 @@ def model_fn_builder(bert_config, init_checkpoint, layer_indexes, use_tpu,
tvars = tf.trainable_variables()
scaffold_fn = None
-(assignment_map, _) = modeling.get_assigment_map_from_checkpoint(
+(assignment_map, _) = modeling.get_assignment_map_from_checkpoint(
tvars, init_checkpoint)
if use_tpu:
......
@@ -315,7 +315,7 @@ def get_activation(activation_string):
raise ValueError("Unsupported activation: %s" % act)
-def get_assigment_map_from_checkpoint(tvars, init_checkpoint):
+def get_assignment_map_from_checkpoint(tvars, init_checkpoint):
"""Compute the union of the current variables and checkpoint variables."""
assignment_map = {}
initialized_variable_names = {}
......
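
For context on how the renamed helper is consumed, here is a hedged sketch of the call pattern visible in the hunks below: the returned map feeds TF 1.x's `tf.train.init_from_checkpoint`, which overrides variable initializers with checkpoint values. The checkpoint path is a placeholder.

```python
# Sketch of the call pattern (TF 1.x); the checkpoint path is a placeholder.
import tensorflow as tf
import modeling  # the BERT modeling module edited above

init_checkpoint = "/path/to/bert_model.ckpt"  # placeholder, not shipped here

tvars = tf.trainable_variables()
(assignment_map,
 initialized_variable_names) = modeling.get_assignment_map_from_checkpoint(
     tvars, init_checkpoint)

# Override the initializers of mapped variables with the checkpoint's values.
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
```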
@@ -571,9 +571,8 @@ def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate,
scaffold_fn = None
if init_checkpoint:
-(assignment_map,
- initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
-     tvars, init_checkpoint)
+(assignment_map, initialized_variable_names
+) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
......
@@ -152,9 +152,8 @@ def model_fn_builder(bert_config, init_checkpoint, learning_rate,
initialized_variable_names = {}
scaffold_fn = None
if init_checkpoint:
-(assignment_map,
- initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
-     tvars, init_checkpoint)
+(assignment_map, initialized_variable_names
+) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
......
@@ -576,9 +576,8 @@ def model_fn_builder(bert_config, init_checkpoint, learning_rate,
initialized_variable_names = {}
scaffold_fn = None
if init_checkpoint:
-(assignment_map,
- initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
-     tvars, init_checkpoint)
+(assignment_map, initialized_variable_names
+) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
......
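
The three call-site hunks above share the same shape, and each is truncated just after the `if use_tpu:` branch begins. Reconstructed from that visible context (so treat it as an illustrative sketch, not the verbatim file contents), the surrounding pattern looks like this: on TPU the checkpoint restore must run inside a `Scaffold` factory, while on CPU/GPU it runs at graph-build time.

```python
# Illustrative reconstruction of the branch the diff truncates; the variable
# names (use_tpu, init_checkpoint, assignment_map, scaffold_fn) are the ones
# visible in the hunks above.
if use_tpu:

  def tpu_scaffold():
    # On TPU, restore checkpoint weights lazily, inside the Scaffold.
    tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
    return tf.train.Scaffold()

  scaffold_fn = tpu_scaffold
else:
  # On CPU/GPU, restore immediately at graph construction time.
  tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
```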