Unverified commit 57f95283 authored by Ammar Asmro, committed by GitHub

Fix (are -> care) typo in README.md

Parent aff0e6a7
@@ -43,7 +43,7 @@ minutes.
 BERT is method of pre-training language representations, meaning that we train a
 general-purpose "language understanding" model on a large text corpus (like
-Wikipedia), and then use that model for downstream NLP tasks that we are about
+Wikipedia), and then use that model for downstream NLP tasks that we care about
 (like question answering). BERT outperforms previous methods because it is the
 first *unsupervised*, *deeply bidirectional* system for pre-training NLP.