机器未来 / Paddle (forked from PaddlePaddle / Paddle)
Commit 52b45007 (unverified)

update multi_dot exposure rules (#36018)

Authored Sep 26, 2021 by zhangkaihuo; committed via GitHub, Sep 26, 2021. Parent: c330c3d9

Showing 4 changed files with 80 additions and 78 deletions (+80 −78)
python/paddle/__init__.py (+0 −1)
python/paddle/fluid/tests/unittests/test_multi_dot_op.py (+10 −8)
python/paddle/tensor/__init__.py (+1 −0)
python/paddle/tensor/linalg.py (+69 −69)
python/paddle/__init__.py @ 52b45007

@@ -103,7 +103,6 @@ from .tensor.linalg import histogram  # noqa: F401
 from .tensor.linalg import mv  # noqa: F401
 from .tensor.linalg import det  # noqa: F401
 from .tensor.linalg import slogdet  # noqa: F401
-from .tensor.linalg import multi_dot  # noqa: F401
 from .tensor.linalg import matrix_power  # noqa: F401
 from .tensor.linalg import svd  # noqa: F401
 from .tensor.linalg import solve  # noqa: F401
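The net effect of this hunk is that `multi_dot` is no longer re-exported at the top-level `paddle` namespace; it remains reachable through the `paddle.linalg` submodule. A minimal sketch of what removing a top-level re-export does to a package's public surface, using plain `types.ModuleType` stand-ins (hypothetical `pkg` names, not Paddle itself):

```python
import types

# Stand-in for paddle/tensor/linalg.py: the submodule that owns the op.
linalg = types.ModuleType("pkg.linalg")
linalg.multi_dot = lambda mats: "product"  # hypothetical placeholder op

# Stand-in for the package __init__ after this commit: the submodule is
# attached, but the top-level alias is gone.
pkg = types.ModuleType("pkg")
pkg.linalg = linalg
# Before the commit there would also have been:
#   pkg.multi_dot = linalg.multi_dot

print(hasattr(pkg, "multi_dot"))         # False: top-level alias removed
print(hasattr(pkg.linalg, "multi_dot"))  # True: namespaced path remains
```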
python/paddle/fluid/tests/unittests/test_multi_dot_op.py @ 52b45007

@@ -198,32 +198,34 @@ class TestMultiDotOpError(unittest.TestCase):
                                          paddle.static.Program()):
             # The inputs type of multi_dot must be list matrix.
             input1 = 12
-            self.assertRaises(TypeError, paddle.multi_dot, [input1, input1])
+            self.assertRaises(TypeError, paddle.linalg.multi_dot,
+                              [input1, input1])
 
             # The inputs dtype of multi_dot must be float64, float64 or float16.
             input2 = paddle.static.data(
                 name='input2', shape=[10, 10], dtype="int32")
-            self.assertRaises(TypeError, paddle.multi_dot, [input2, input2])
+            self.assertRaises(TypeError, paddle.linalg.multi_dot,
+                              [input2, input2])
 
             # the number of tensor must be larger than 1
             x0 = paddle.static.data(name='x0', shape=[3, 2], dtype="float64")
-            self.assertRaises(ValueError, paddle.multi_dot, [x0])
+            self.assertRaises(ValueError, paddle.linalg.multi_dot, [x0])
 
             #the first tensor must be 1D or 2D
             x1 = paddle.static.data(name='x1', shape=[3, 2, 3], dtype="float64")
             x2 = paddle.static.data(name='x2', shape=[3, 2], dtype="float64")
-            self.assertRaises(ValueError, paddle.multi_dot, [x1, x2])
+            self.assertRaises(ValueError, paddle.linalg.multi_dot, [x1, x2])
 
             #the last tensor must be 1D or 2D
             x3 = paddle.static.data(name='x3', shape=[3, 2], dtype="float64")
             x4 = paddle.static.data(name='x4', shape=[3, 2, 2], dtype="float64")
-            self.assertRaises(ValueError, paddle.multi_dot, [x3, x4])
+            self.assertRaises(ValueError, paddle.linalg.multi_dot, [x3, x4])
 
             #the tensor must be 2D, except first and last tensor
             x5 = paddle.static.data(name='x5', shape=[3, 2], dtype="float64")
             x6 = paddle.static.data(name='x6', shape=[2], dtype="float64")
             x7 = paddle.static.data(name='x7', shape=[2, 2], dtype="float64")
-            self.assertRaises(ValueError, paddle.multi_dot, [x5, x6, x7])
+            self.assertRaises(ValueError, paddle.linalg.multi_dot, [x5, x6, x7])
 
 
 class APITestMultiDot(unittest.TestCase):
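The `assertRaises` checks above keep the same validation rules while switching to the `paddle.linalg.multi_dot` spelling. For reference, `np.linalg.multi_dot` (the ground truth used elsewhere in this test file) enforces the same "at least two arrays" rule; a numpy-only sketch:

```python
import numpy as np

a = np.ones((3, 2))

# A single operand is rejected, mirroring the Paddle test's
# assertRaises(ValueError, paddle.linalg.multi_dot, [x0]) check.
try:
    np.linalg.multi_dot([a])
    raised = False
except ValueError:
    raised = True
print(raised)  # True
```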
@@ -232,7 +234,7 @@ class APITestMultiDot(unittest.TestCase):
         with paddle.static.program_guard(paddle.static.Program()):
             x0 = paddle.static.data(name='x0', shape=[3, 2], dtype="float64")
             x1 = paddle.static.data(name='x1', shape=[2, 3], dtype='float64')
-            result = paddle.multi_dot([x0, x1])
+            result = paddle.linalg.multi_dot([x0, x1])
             exe = paddle.static.Executor(paddle.CPUPlace())
             data1 = np.random.rand(3, 2).astype("float64")
             data2 = np.random.rand(2, 3).astype("float64")
@@ -254,7 +256,7 @@ class APITestMultiDot(unittest.TestCase):
         input_array2 = np.random.rand(4, 3).astype("float64")
         data1 = paddle.to_tensor(input_array1)
         data2 = paddle.to_tensor(input_array2)
-        out = paddle.multi_dot([data1, data2])
+        out = paddle.linalg.multi_dot([data1, data2])
         expected_result = np.linalg.multi_dot([input_array1, input_array2])
         self.assertTrue(np.allclose(expected_result, out.numpy()))
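The dygraph test above validates `paddle.linalg.multi_dot` against `np.linalg.multi_dot`. The numpy half of that check is runnable on its own; for exactly two operands, `multi_dot` reduces to an ordinary matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
input_array1 = rng.random((3, 4))
input_array2 = rng.random((4, 3))

# Same reference computation the test uses:
expected_result = np.linalg.multi_dot([input_array1, input_array2])

# For two matrices there is no order to optimize, so this is a plain matmul.
print(np.allclose(expected_result, input_array1 @ input_array2))  # True
```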
python/paddle/tensor/__init__.py @ 52b45007

@@ -387,6 +387,7 @@ tensor_method_func = [ #noqa
     'bitwise_not',
     'broadcast_tensors',
     'uniform_',
+    'multi_dot',
     'solve',
 ]
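Adding `'multi_dot'` to `tensor_method_func` is what exposes the function as a `Tensor` method. A toy sketch of driving method attachment from such a name list (hypothetical `Tensor` class and binding loop, not Paddle's actual machinery):

```python
import numpy as np

# Hypothetical free function, standing in for paddle.tensor.linalg.multi_dot.
def multi_dot(self, others):
    return np.linalg.multi_dot([self.data] + [o.data for o in others])

class Tensor:
    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)

tensor_method_func = ['multi_dot']  # the list this hunk extends

# Attach each listed function to the class as a method.
for name in tensor_method_func:
    setattr(Tensor, name, globals()[name])

a, b = Tensor(np.eye(2)), Tensor(np.ones((2, 2)))
print(np.allclose(a.multi_dot([b]), np.ones((2, 2))))  # True
```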
python/paddle/tensor/linalg.py @ 52b45007

@@ -551,8 +551,8 @@ def cond(x, p=None, name=None):
     Computes the condition number of a matrix or batches of matrices with respect to a matrix norm ``p``.
 
     Args:
         x (Tensor): The input tensor could be tensor of shape ``(*, m, n)`` where ``*`` is zero or more batch dimensions
             for ``p`` in ``(2, -2)``, or of shape ``(*, n, n)`` where every matrix is invertible for any supported ``p``.
             And the input data type could be ``float32`` or ``float64``.
         p (float|string, optional): Order of the norm. Supported values are `fro`, `nuc`, `1`, `-1`, `2`, `-2`,
             `inf`, `-inf`. Default value is `None`, meaning that the order of the norm is `2`.

@@ -607,7 +607,7 @@ def cond(x, p=None, name=None):
         # out_minus_inf.numpy() [1.]
 
         a = paddle.to_tensor(np.random.randn(2, 4, 4).astype('float32'))
         # a.numpy()
         # [[[ 0.14063153 -0.996288    0.7996131  -0.02571543]
         #   [-0.16303636  1.5534962  -0.49919784 -0.04402903]
         #   [-1.1341571  -0.6022629   0.5445269   0.29154757]
@@ -975,8 +975,8 @@ def t(input, name=None):
             return out
 
     check_variable_and_dtype(
-        input, 'input', ['float16', 'float32', 'float64', 'int32', 'int64'],
-        'transpose')
+        input, 'input', ['float16', 'float32', 'float64', 'int32',
+        'int64'], 'transpose')
     helper = LayerHelper('t', **locals())
     out = helper.create_variable_for_type_inference(input.dtype)
@@ -1108,17 +1108,17 @@ def matrix_rank(x, tol=None, hermitian=False, name=None):
     r"""
     Computes the rank of a matrix.
     The rank of a matrix is the number of singular values that are greater than the specified `tol` threshold when hermitian=False,
     or the number of eigenvalues in absolute value that are greater than the specified `tol` threshold when hermitian=True.
 
     Args:
         x (Tensor): The input tensor. Its shape should be `[..., m, n]`, where `...` is zero or more batch dimensions. If `x` is a batch
             of matrices then the output has the same batch dimensions. The data type of `x` should be float32 or float64.
         tol (float,Tensor,optional): the tolerance value. Default: None. If `tol` is not specified, and `sigma` is the largest
             singular value (or eigenvalues in absolute value), and `eps` is the epsilon value for the dtype of `x`, then `tol` is computed
             with formula `tol=sigma * max(m,n) * eps`. Note that if `x` is a batch of matrices, `tol` is computed this way for every batch.
         hermitian (bool,optional): indicates whether `x` is Hermitian. Default: False. When hermitian=True, `x` is assumed to be Hermitian,
             enabling a more efficient method for finding eigenvalues, but `x` is not checked inside the function. Instead, We just use
             the lower triangular of the matrix to compute.
         name (str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

@@ -1225,7 +1225,7 @@ def bmm(x, y, name=None):
             #output value:
             #[[[6.0, 6.0],[12.0, 12.0]],[[45.0, 45.0],[60.0, 60.0]]]
             out_np = out.numpy()
     """
     x_shape = x.shape
     y_shape = y.shape
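The `matrix_rank` docstring above gives the default tolerance formula `tol = sigma * max(m, n) * eps`. numpy's `matrix_rank` uses the same default, so it serves as a quick reference for the behavior:

```python
import numpy as np

# Rank-deficient 3x2 matrix: the second row is twice the first.
m = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])

# With tol=None the cutoff defaults to sigma_max * max(m, n) * eps,
# so singular values that are only rounding noise are treated as zero.
print(np.linalg.matrix_rank(m))  # 2
```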
@@ -1360,7 +1360,7 @@ def det(x):
     Returns:
         y (Tensor):the determinant value of a square matrix or batches of square matrices.
 
-    Example:
+    Examples:
         .. code-block:: python
 
         import paddle

@@ -1370,10 +1370,10 @@ def det(x):
         A = paddle.det(x)
         print(A)
         # [ 0.02547996,  2.52317095, -6.15900707])
     """
     if in_dygraph_mode():
         return core.ops.determinant(x)
@@ -1403,7 +1403,7 @@ def slogdet(x):
     """
     Calculates the sign and natural logarithm of the absolute value of a square matrix's or batches square matrices' determinant.
     The determinant can be computed with ``sign * exp(logabsdet)
 
     Supports input of float, double
 
     Note that for matrices that have zero determinant, this returns ``(0, -inf)``

@@ -1415,7 +1415,7 @@ def slogdet(x):
         y (Tensor): A tensor containing the sign of the determinant and the natural logarithm
         of the absolute value of determinant, respectively.
 
-    Example:
+    Examples:
         .. code-block:: python
 
         import paddle

@@ -1425,7 +1425,7 @@ def slogdet(x):
         A = paddle.slogdet(x)
         print(A)
         # [[ 1.        ,  1.        , -1.        ],
         # [-0.98610914, -0.43010661, -0.10872950]])
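As the `slogdet` docstring notes, the determinant can be recovered as `sign * exp(logabsdet)`. A numpy check of that identity:

```python
import numpy as np

a = np.array([[4.0, 2.0],
              [1.0, 3.0]])
sign, logabsdet = np.linalg.slogdet(a)

# det(a) = 4*3 - 2*1 = 10, and sign * exp(logabsdet) reproduces it.
print(np.allclose(sign * np.exp(logabsdet), np.linalg.det(a)))  # True
print(np.allclose(np.linalg.det(a), 10.0))  # True
```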
@@ -1461,19 +1461,19 @@ def svd(x, full_matrices=False, name=None):
     Let :math:`X` be the input matrix or a batch of input matrices, the output should satisfies:
 
     .. math::
         X = U * diag(S) * VT
 
     Args:
         x (Tensor): The input tensor. Its shape should be `[..., N, M]`,
             where `...` is zero or more batch dimensions. N and M can be arbitraty
             positive number. Note that if x is sigular matrices, the grad is numerical
             instable. The data type of x should be float32 or float64.
         full_matrices (bool): A flag to control the behavor of svd.
             If full_matrices = True, svd op will compute full U and V matrics,
             which means shape of U is `[..., N, N]`, shape of V is `[..., M, M]`. K = min(M, N).
             If full_matrices = False, svd op will use a economic method to store U and V.
             which means shape of U is `[..., N, K]`, shape of V is `[..., M, K]`. K = min(M, N).
         name (str, optional): Name for the operation (optional, default is None).
             For more information, please refer to :ref:`api_guide_Name`.
 
     Returns:

@@ -1497,9 +1497,9 @@ def svd(x, full_matrices=False, name=None):
             print (vh)
             #VT= [[ 0.51411221,  0.85772294],
             #     [ 0.85772294, -0.51411221]]
             # one can verify : U * S * VT == X
             #                  U * UH == I
             #                  V * VH == I
     """
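The docstring's `X = U * diag(S) * VT` factorization and the economic shapes (`K = min(M, N)`) can be checked with numpy, whose `full_matrices=False` mode matches the behavior described above:

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [1.0, 1.0],
              [0.0, 1.0]])            # N=3 rows, M=2 columns
u, s, vt = np.linalg.svd(x, full_matrices=False)

# Economic shapes: K = min(N, M) = 2.
print(u.shape, s.shape, vt.shape)     # (3, 2) (2,) (2, 2)

# One can verify X == U * diag(S) * VT.
print(np.allclose(u @ np.diag(s) @ vt, x))  # True
```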
@@ -1526,7 +1526,7 @@ def svd(x, full_matrices=False, name=None):
 def matrix_power(x, n, name=None):
     r"""
     Computes the n-th power of a square matrix or a batch of square matrices.
 
     Let :math:`X` be a sqaure matrix or a batch of square matrices, :math:`n` be
     an exponent, the equation should be:

@@ -1596,27 +1596,27 @@ def matrix_power(x, n, name=None):
 def eigvals(x, name=None):
     """
     Compute the eigenvalues of one or more general matrices.
 
     Warning:
         The gradient kernel of this operator does not yet developed.
         If you need back propagation through this operator, please replace it with paddle.linalg.eig.
 
     Args:
         x (Tensor): A square matrix or a batch of square matrices whose eigenvalues will be computed.
             Its shape should be `[*, M, M]`, where `*` is zero or more batch dimensions.
             Its data type should be float32, float64, complex64, or complex128.
         name (str, optional): Name for the operation (optional, default is None).
             For more information, please refer to :ref:`api_guide_Name`.
 
     Returns:
         Tensor: A tensor containing the unsorted eigenvalues which has the same batch dimensions with `x`.
             The eigenvalues are complex-valued even when `x` is real.
 
     Examples:
         .. code-block:: python
 
             import paddle
 
             paddle.set_device("cpu")
             paddle.seed(1234)
@@ -1630,8 +1630,8 @@ def eigvals(x, name=None):
     """
     check_variable_and_dtype(x, 'dtype',
-                             ['float32', 'float64', 'complex64', 'complex128'],
-                             'eigvals')
+                             ['float32', 'float64', 'complex64',
+                              'complex128'], 'eigvals')
 
     x_shape = list(x.shape)
     if len(x_shape) < 2:
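The `eigvals` docstring above notes that the result is complex-valued even for real input. A numpy illustration using a real rotation matrix, whose eigenvalues are the purely imaginary pair ±i:

```python
import numpy as np

# A 90-degree rotation matrix: real entries, purely imaginary eigenvalues.
m = np.array([[0.0, -1.0],
              [1.0,  0.0]])
w = np.linalg.eigvals(m)

print(np.iscomplexobj(w))                        # True
print(np.allclose(sorted(w.imag), [-1.0, 1.0]))  # True
```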
@@ -1657,7 +1657,7 @@ def multi_dot(x, name=None):
     """
     Multi_dot is an operator that calculates multiple matrix multiplications.
 
-    Supports inputs of float, double and float16 dtypes. This function does not
+    Supports inputs of float16(only GPU support), float32 and float64 dtypes. This function does not
     support batched inputs.
 
     The input tensor in [x] must be 2-D except for the first and last can be 1-D.

@@ -1699,7 +1699,7 @@ def multi_dot(x, name=None):
             B_data = np.random.random([4, 5]).astype(np.float32)
             A = paddle.to_tensor(A_data)
             B = paddle.to_tensor(B_data)
-            out = paddle.multi_dot([A, B])
+            out = paddle.linalg.multi_dot([A, B])
             print(out.numpy().shape)
             # [3, 5]

@@ -1710,7 +1710,7 @@ def multi_dot(x, name=None):
             A = paddle.to_tensor(A_data)
             B = paddle.to_tensor(B_data)
             C = paddle.to_tensor(C_data)
-            out = paddle.multi_dot([A, B, C])
+            out = paddle.linalg.multi_dot([A, B, C])
             print(out.numpy().shape)
             # [10, 7]
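`multi_dot` picks the multiplication order that minimizes scalar-multiplication cost; this is the classic matrix-chain problem. A small, self-contained sketch of that dynamic program (hypothetical helper, not Paddle's kernel):

```python
from functools import lru_cache

def multi_dot_cost(dims):
    """Matrix-chain DP. dims has length n+1; matrix i has shape
    (dims[i], dims[i+1]). Returns the minimum scalar-multiply count."""
    n = len(dims) - 1

    @lru_cache(maxsize=None)
    def best(i, j):  # cheapest way to multiply matrices i..j
        if i == j:
            return 0
        return min(best(i, k) + best(k + 1, j)
                   + dims[i] * dims[k + 1] * dims[j + 1]
                   for k in range(i, j))

    return best(0, n - 1)

# A(10x100) @ B(100x5) @ C(5x50): (A@B)@C costs 5000 + 2500 = 7500,
# while A@(B@C) would cost 75000 -- the order matters a lot.
print(multi_dot_cost([10, 100, 5, 50]))  # 7500
```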
@@ -1735,7 +1735,7 @@ def multi_dot(x, name=None):
 def eigh(x, UPLO='L', name=None):
     """
     Compute the eigenvalues and eigenvectors of a
     complex Hermitian (conjugate symmetric) or a real symmetric matrix.
 
     Args:

@@ -1804,7 +1804,7 @@ def eigh(x, UPLO='L', name=None):
 def pinv(x, rcond=1e-15, hermitian=False, name=None):
     r"""
     Calculate pseudo inverse via SVD(singular value decomposition)
     of one matrix or batches of regular matrix.
 
     .. math::
@@ -1815,30 +1815,30 @@ def pinv(x, rcond=1e-15, hermitian=False, name=None):
         else:
             x = u * s * ut  (eigh)
             out = u * 1/s * u.conj().transpose(-2,-1)
 
     If x is hermitian or symmetric matrix, svd will be replaced with eigh.
 
     Args:
         x(Tensor): The input tensor. Its shape should be (*, m, n)
             where * is zero or more batch dimensions. m and n can be
             arbitraty positive number. The data type of x should be
             float32 or float64 or complex64 or complex128. When data
             type is complex64 or cpmplex128, hermitian should be set
             True.
         rcond(Tensor, optional): the tolerance value to determine
             when is a singular value zero. Defalut:1e-15.
         hermitian(bool, optional): indicates whether x is Hermitian
             if complex or symmetric if real. Default: False.
         name(str|None): A name for this layer(optional). If set None,
             the layer will be named automatically.
 
     Returns:
         Tensor: The tensor with same data type with x. it represents
             pseudo inverse of x. Its shape should be (*, n, m).
 
     Examples:
         .. code-block:: python

@@ -1998,8 +1998,8 @@ def pinv(x, rcond=1e-15, hermitian=False, name=None):
         helper = LayerHelper('pinv', **locals())
         dtype = x.dtype
         check_variable_and_dtype(x, 'dtype',
-                                 ['float32', 'float64', 'complex64', 'complex128'],
-                                 'pinv')
+                                 ['float32', 'float64', 'complex64',
+                                  'complex128'], 'pinv')
 
         if dtype == paddle.complex128:
             s_type = 'float64'
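The `pinv` docstring describes the SVD route: invert only the singular values above the `rcond` cutoff, then transpose the factors. A numpy sketch of that recipe (hypothetical helper name), checked against `np.linalg.pinv`:

```python
import numpy as np

def pinv_via_svd(x, rcond=1e-15):
    # Economy SVD; zero out 1/s for singular values below the cutoff.
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    cutoff = rcond * s.max()
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
    return vt.T @ np.diag(s_inv) @ u.T

a = np.random.default_rng(0).random((5, 3))
print(np.allclose(pinv_via_svd(a), np.linalg.pinv(a)))  # True
```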
@@ -2079,40 +2079,40 @@ def solve(x, y, name=None):
     Computes the solution of a square system of linear equations with a unique solution for input 'X' and 'Y'.
     Let :math: `X` be a sqaure matrix or a batch of square matrices, :math:`Y` be
     a vector/matrix or a batch of vectors/matrices, the equation should be:
 
     .. math::
         Out = X^-1 * Y
 
     Specifically,
     - This system of linear equations has one solution if and only if input 'X' is invertible.
 
     Args:
         x (Tensor): A square matrix or a batch of square matrices. Its shape should be `[*, M, M]`, where `*` is zero or
             more batch dimensions. Its data type should be float32 or float64.
         y (Tensor): A vector/matrix or a batch of vectors/matrices. Its shape should be `[*, M, K]`, where `*` is zero or
             more batch dimensions. Its data type should be float32 or float64.
         name(str, optional): Name for the operation (optional, default is None).
             For more information, please refer to :ref:`api_guide_Name`.
 
     Returns:
         Tensor: The solution of a square system of linear equations with a unique solution for input 'x' and 'y'.
             Its data type should be the same as that of `x`.
 
     Examples:
         .. code-block:: python
 
             # a square system of linear equations:
             # 2*X0 + X1 = 9
             # X0 + 2*X1 = 8
 
             import paddle
             import numpy as np
 
             np_x = np.array([[3, 1],[1, 2]])
             np_y = np.array([9, 8])
             x = paddle.to_tensor(np_x, dtype="float64")
             y = paddle.to_tensor(np_y, dtype="float64")
             out = paddle.linalg.solve(x, y)
             print(out)
             # [2., 3.])
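The `solve` example above can be cross-checked with numpy; for the coefficient matrix actually shown (`[[3, 1], [1, 2]]`), the system is `3*X0 + X1 = 9` and `X0 + 2*X1 = 8`, and the printed result `[2., 3.]` agrees:

```python
import numpy as np

np_x = np.array([[3.0, 1.0],
                 [1.0, 2.0]])
np_y = np.array([9.0, 8.0])

# Solves np_x @ out == np_y.
out = np.linalg.solve(np_x, np_y)
print(out)  # [2. 3.]
```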