Commit e13c1f34
Author: Jacob Devlin
Date: Nov 05, 2018
Parent: 2f82f216

    Fixing typo in function name and updating README

6 changed files with 27 additions and 16 deletions (+27, -16):

  README.md            +19  -5
  extract_features.py   +1  -1
  modeling.py           +1  -1
  run_classifier.py     +2  -3
  run_pretraining.py    +2  -3
  run_squad.py          +2  -3
README.md

# BERT

**\*\*\*\*\* New November 5th, 2018: Third-party PyTorch version of BERT
available \*\*\*\*\***

NLP researchers from HuggingFace made a
[PyTorch version of BERT available](https://github.com/huggingface/pytorch-pretrained-BERT)
which is compatible with our pre-trained checkpoints and is able to reproduce
our results. (Thanks!) We were not involved in the creation or maintenance of
the PyTorch implementation so please direct any questions towards the authors of
that repository.

**\*\*\*\*\* New November 3rd, 2018: Multilingual and Chinese models available
\*\*\*\*\***
@@ -63,8 +73,8 @@ minutes.

 ## What is BERT?

-BERT is method of pre-training language representations, meaning that we train a
+BERT is a method of pre-training language representations, meaning that we train a
 general-purpose "language understanding" model on a large text corpus (like
 Wikipedia), and then use that model for downstream NLP tasks that we care about
 (like question answering). BERT outperforms previous methods because it is the
 first *unsupervised*, *deeply bidirectional* system for pre-training NLP.
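The "unsupervised" pre-training the passage above refers to is (in part) BERT's masked language model: a fraction of input tokens is corrupted and the model must reconstruct the originals. As a rough illustration only (the 15%/80%/10%/10% rates are the ones the BERT paper describes; the toy vocabulary and function name here are made up for the sketch, not taken from this repository):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Illustrative masked-LM corruption: each position is selected with
    probability mask_prob; a selected token becomes [MASK] 80% of the time,
    a random vocabulary token 10% of the time, and is left unchanged 10%
    of the time. Returns the corrupted sequence and the prediction targets."""
    rng = random.Random(seed)
    vocab = ["the", "cat", "sat", "on", "mat"]  # toy vocabulary (assumption)
    output, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok  # the model must predict the original token here
            r = rng.random()
            if r < 0.8:
                output[i] = "[MASK]"
            elif r < 0.9:
                output[i] = rng.choice(vocab)
            # else: keep the original token (but it is still a target)
    return output, targets
```

Positions not selected as targets pass through unchanged; the loss is computed only over the positions recorded in `targets`.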
@@ -778,9 +788,13 @@ information.

 #### Is there a PyTorch version available?

-There is no official PyTorch implementation. If someone creates a line-for-line
-PyTorch reimplementation so that our pre-trained checkpoints can be directly
-converted, we would be happy to link to that PyTorch version here.
+There is no official PyTorch implementation. However, NLP researchers from
+HuggingFace made a
+[PyTorch version of BERT available](https://github.com/huggingface/pytorch-pretrained-BERT)
+which is compatible with our pre-trained checkpoints and is able to reproduce
+our results. We were not involved in the creation or maintenance of the PyTorch
+implementation so please direct any questions towards the authors of that
+repository.

 #### Will models in other languages be released?
extract_features.py

@@ -170,7 +170,7 @@ def model_fn_builder(bert_config, init_checkpoint, layer_indexes, use_tpu,

     tvars = tf.trainable_variables()
     scaffold_fn = None
-    (assignment_map, _) = modeling.get_assigment_map_from_checkpoint(
+    (assignment_map, _) = modeling.get_assignment_map_from_checkpoint(
         tvars, init_checkpoint)
     if use_tpu:
modeling.py

@@ -315,7 +315,7 @@ def get_activation(activation_string):
   raise ValueError("Unsupported activation: %s" % act)


-def get_assigment_map_from_checkpoint(tvars, init_checkpoint):
+def get_assignment_map_from_checkpoint(tvars, init_checkpoint):
   """Compute the union of the current variables and checkpoint variables."""
   assignment_map = {}
   initialized_variable_names = {}
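The renamed function's docstring ("Compute the union of the current variables and checkpoint variables") describes a name-intersection: match the model's trainable variables against the names stored in a checkpoint so that matching ones can be restored. A simplified sketch of that idea (the real function operates on `tf.Variable` objects and a checkpoint file; the plain-string version and its name here are illustrative assumptions):

```python
import re

def get_assignment_map_sketch(var_names, checkpoint_names):
    """Sketch of get_assignment_map_from_checkpoint's core logic: intersect
    the graph's variable names with the checkpoint's variable names.
    Names arriving from TensorFlow carry a ':0'-style suffix that the
    checkpoint names lack, so strip it before matching."""
    # Map suffix-stripped name -> original graph name.
    stripped = {re.sub(r":\d+$", "", n): n for n in var_names}
    assignment_map = {}
    initialized_variable_names = {}
    for ckpt_name in checkpoint_names:
        if ckpt_name in stripped:
            # Restore this checkpoint tensor into the same-named variable.
            assignment_map[ckpt_name] = ckpt_name
            initialized_variable_names[stripped[ckpt_name]] = 1
    return assignment_map, initialized_variable_names
```

Variables present in the graph but absent from the checkpoint are simply left to their fresh initializers, which is what lets the same loading path serve both fine-tuning and feature extraction.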
run_classifier.py

@@ -571,9 +571,8 @@ def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate,
     scaffold_fn = None
     if init_checkpoint:
-      (assignment_map,
-       initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
-           tvars, init_checkpoint)
+      (assignment_map, initialized_variable_names
+      ) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
       if use_tpu:

         def tpu_scaffold():
run_pretraining.py

@@ -152,9 +152,8 @@ def model_fn_builder(bert_config, init_checkpoint, learning_rate,
     initialized_variable_names = {}
     scaffold_fn = None
     if init_checkpoint:
-      (assignment_map,
-       initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
-           tvars, init_checkpoint)
+      (assignment_map, initialized_variable_names
+      ) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
       if use_tpu:

         def tpu_scaffold():
run_squad.py

@@ -576,9 +576,8 @@ def model_fn_builder(bert_config, init_checkpoint, learning_rate,
     initialized_variable_names = {}
     scaffold_fn = None
     if init_checkpoint:
-      (assignment_map,
-       initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
-           tvars, init_checkpoint)
+      (assignment_map, initialized_variable_names
+      ) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
       if use_tpu:

         def tpu_scaffold():
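All four call sites fixed in this commit share the same checkpoint-initialization pattern: on TPU the restore is deferred into a scaffold function that the estimator invokes on each worker, while on CPU/GPU it runs immediately. A framework-free sketch of that control flow (the `init_from_checkpoint` and `make_scaffold` parameters stand in for TF1's `tf.train.init_from_checkpoint` and `tf.train.Scaffold`; this helper name is made up for illustration):

```python
def build_scaffold_fn(init_checkpoint, use_tpu, init_from_checkpoint, make_scaffold):
    """Sketch of the init_checkpoint / use_tpu branching shared by
    run_classifier, run_pretraining, and run_squad."""
    scaffold_fn = None
    if init_checkpoint:
        if use_tpu:
            # On TPU, restoration must happen inside the scaffold function,
            # which runs later in each worker's graph-building context.
            def tpu_scaffold():
                init_from_checkpoint(init_checkpoint)
                return make_scaffold()
            scaffold_fn = tpu_scaffold
        else:
            # On CPU/GPU, restore eagerly while building the model_fn.
            init_from_checkpoint(init_checkpoint)
    return scaffold_fn
```

The deferral matters because a TPUEstimator replicates the graph across workers; capturing the restore in a closure ensures every replica performs it, rather than only the coordinator.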