我傻x / bert (forked from an inaccessible project; in sync with the fork source)
Commit bed1f856 (unverified)
Authored Oct 31, 2018 by cbockman; committed by GitHub on Oct 31, 2018

minor README spelling

preemptable -> preemtible

Parent: fe354751
Showing 1 changed file with 1 addition and 1 deletion.

README.md (+1, −1)
@@ -671,7 +671,7 @@ accuracy numbers.
 *   If you are pre-training from scratch, be prepared that pre-training is
     computationally expensive, especially on GPUs. If you are pre-training from
     scratch, our recommended recipe is to pre-train a `BERT-Base` on a single
-    [preemptable Cloud TPU v2](https://cloud.google.com/tpu/docs/pricing), which
+    [preemptible Cloud TPU v2](https://cloud.google.com/tpu/docs/pricing), which
     takes about 2 weeks at a cost of about $500 USD (based on the pricing in
     October 2018). You will have to scale down the batch size when only training
     on a single Cloud TPU, compared to what was used in the paper. It is
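The README passage touched by this hunk notes that the batch size must be scaled down when pre-training on a single Cloud TPU, compared to the 256 used in the paper. As a rough sketch only (the gs:// paths, $TPU_NAME, and the batch-size value of 128 are illustrative assumptions, not part of this commit), a reduced-batch-size run of this repo's run_pretraining.py could look like:

    # Sketch of a single-TPU pre-training run; paths, $TPU_NAME, and the
    # batch size of 128 are assumed values for illustration only.
    python run_pretraining.py \
      --input_file=gs://my_bucket/tf_examples.tfrecord \
      --output_dir=gs://my_bucket/pretraining_output \
      --do_train=True \
      --bert_config_file=$BERT_BASE_DIR/bert_config.json \
      --train_batch_size=128 \
      --max_seq_length=128 \
      --max_predictions_per_seq=20 \
      --num_train_steps=1000000 \
      --learning_rate=1e-4 \
      --use_tpu=True \
      --tpu_name=$TPU_NAME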