Unverified · Commit 76adcc80 · authored by chenxujun, committed by GitHub

Fix typos, test=document_fix (#51775)

Parent 81f3f6b5
@@ -34,7 +34,7 @@ The root cause of poor reusability is the inflexibility of the original Op archi
 After the release of Paddle 2.0, it has received many feedbacks from internal and external users that the performance of the dynamic graph is several times lower than that of competing products in the execution scenario of small model on CPU.
-The main reason for this problem is: the execution path of the C++ side of the Padddle dynamic graph is relatively long and the scheduling overhead is relatively heavy, which is related to the early design of the dynamic graph which is compatible with the static graph and inherits many object construction processes of the static graph Op.
+The main reason for this problem is: the execution path of the C++ side of the Paddle dynamic graph is relatively long and the scheduling overhead is relatively heavy, which is related to the early design of the dynamic graph which is compatible with the static graph and inherits many object construction processes of the static graph Op.
 Therefore, the dynamic graph needs to be upgraded to a function-based scheduling architecture, and this problem can be solved by abandoning the original complex Op architecture, which depends on the OpKernel being changed to a functional writing method.
@@ -213,7 +213,7 @@ void ScaleKernel(const Context& dev_ctx,
 ##### 2.3.1.3 IntArray
-IntArray is an integer type array that can be constructed from `vector<int>`, `Tensor` and `vector<Tensor>`. Currently, it is mainly used to represent dimension index variables such as `shape`, `index` and `aixs`.
+IntArray is an integer type array that can be constructed from `vector<int>`, `Tensor` and `vector<Tensor>`. Currently, it is mainly used to represent dimension index variables such as `shape`, `index` and `axis`.
 Taking `FullKernel` as an example, the shape parameter is used to indicate the dimension information of the returned Tensor (e.g. [2, 8, 8]). When calling `FullKernel`, the parameters of `vector<int>`, `Tensor` and `vector<Tensor>` type variables can be used to complete the call. Using `IntArray` avoids the problem of writing a separate overloaded function for each shape type.
@@ -701,7 +701,7 @@ PD_DECLARE_KERNEL(as_real, CPU, ALL_LAYOUT);
 ...
 ```
-For the specific implementation of `kernel_declare`, please refer to the function implementation in `camke/phi.cmake`, which will not be introduced here.
+For the specific implementation of `kernel_declare`, please refer to the function implementation in `cmake/phi.cmake`, which will not be introduced here.
 ##### 2.3.5.2 Kernel dependencies
......
@@ -22,7 +22,7 @@ const std::string& GetKernelTypeForVarContext::GetVarName(void) const {
       var_name_,
       nullptr,
       errors::InvalidArgument(
-          "Variablle name is null. The context hasn't been initialized. "));
+          "Variable name is null. The context hasn't been initialized. "));
   return *var_name_;
 }
......
@@ -115,7 +115,7 @@ class DefaultKernelSignatureMap {
         Has(op_type),
         true,
         phi::errors::AlreadyExists(
-            "Operator (%s)'s Kernel Siginature has been registered.", op_type));
+            "Operator (%s)'s Kernel Signature has been registered.", op_type));
     map_.insert({std::move(op_type), std::move(signature)});
   }
@@ -160,7 +160,7 @@ class OpUtilsMap {
         arg_mapping_fn_map_.count(op_type),
         0UL,
         phi::errors::AlreadyExists(
-            "Operator (%s)'s argu,emt mapping function has been registered.",
+            "Operator (%s)'s argument mapping function has been registered.",
             op_type));
     arg_mapping_fn_map_.insert({std::move(op_type), std::move(fn)});
   }
......
@@ -87,7 +87,7 @@ void MasterDaemon::_notify_waiting_sockets(const std::string& key) {
   if (_waiting_sockets.find(key) != _waiting_sockets.end()) {
     for (auto waiting_socket : _waiting_sockets.at(key)) {
       auto reply = ReplyType::STOP_WAIT;
-      VLOG(3) << "TCPStore: nofify the socket: " << GetSockName(waiting_socket)
+      VLOG(3) << "TCPStore: notify the socket: " << GetSockName(waiting_socket)
              << " that key: " << key << " is ready.";
      tcputils::send_value<ReplyType>(waiting_socket, reply);
    }
......
@@ -172,7 +172,7 @@ SocketType tcp_listen(const std::string host,
   PADDLE_ENFORCE_GT(sockfd,
                     0,
                     phi::errors::InvalidArgument(
-                        "Bind network on %s:%s failedd.", node, port));
+                        "Bind network on %s:%s failed.", node, port));
   ::listen(sockfd, LISTENQ);
......