With `align_corners = True`, the linearly interpolating modes (`linear`, `bilinear`, and `trilinear`) don't proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is `align_corners = False`. See [`Upsample`](#torch.nn.Upsample "torch.nn.Upsample") for concrete examples of how this affects the outputs.
Warning

This API is currently experimental and may change in the near future.

Torch supports sparse tensors in COO(rdinate) format, which can efficiently store and process tensors for which the majority of elements are zeros.
A sparse tensor is represented as a pair of dense tensors: a tensor of values and a 2D tensor of indices. A sparse tensor can be constructed by providing these two tensors, as well as the size of the sparse tensor (which cannot be inferred from these tensors!) Suppose we want to define a sparse tensor with the entry 3 at location (0, 2), entry 4 at location (1, 0), and entry 5 at location (1, 2). We would then write:
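A sketch of this construction, using the legacy `torch.sparse.FloatTensor` constructor documented below (the printed output format may differ between PyTorch versions):

```py
>>> import torch
>>> i = torch.LongTensor([[0, 1, 1],
...                       [2, 0, 2]])
>>> v = torch.FloatTensor([3, 4, 5])
>>> torch.sparse.FloatTensor(i, v, torch.Size([2, 3])).to_dense()
tensor([[0., 3., 0.],
        [4., 0., 5.]])
```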
Note that the input to LongTensor is NOT a list of index tuples. If you want to write your indices this way, you should transpose before passing them to the sparse constructor:
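For example (again a sketch, continuing with `torch` imported as above; the dense result is the same):

```py
>>> i = torch.LongTensor([[0, 2], [1, 0], [1, 2]])   # one (row, col) pair per entry
>>> v = torch.FloatTensor([3, 4, 5])
>>> torch.sparse.FloatTensor(i.t(), v, torch.Size([2, 3])).to_dense()
tensor([[0., 3., 0.],
        [4., 0., 5.]])
```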
A SparseTensor has the following invariants:

1.  `sparse_dim + dense_dim = len(SparseTensor.shape)`
2.  `SparseTensor._indices().shape = (sparse_dim, nnz)`
3.  `SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:])`

Since `SparseTensor._indices()` is always a 2D tensor, the smallest `sparse_dim = 1`. Therefore, the representation of a SparseTensor of `sparse_dim = 0` is simply a dense tensor.
Our sparse tensor format permits _uncoalesced_ sparse tensors, where there may be duplicate coordinates in the indices; in this case, the interpretation is that the value at that index is the sum of all duplicate value entries. Uncoalesced tensors permit us to implement certain operators more efficiently.
For the most part, you shouldn't have to care whether a sparse tensor is coalesced or not, as most operations work identically on coalesced and uncoalesced sparse tensors. However, there are two cases in which you may need to care.
First, if you repeatedly perform an operation that can produce duplicate entries (e.g., [`torch.sparse.FloatTensor.add()`](#torch.sparse.FloatTensor.add "torch.sparse.FloatTensor.add")), you should occasionally coalesce your sparse tensors to prevent them from growing too large.
Second, some operators will produce different values depending on whether or not they are coalesced (e.g., [`torch.sparse.FloatTensor._values()`](#torch.sparse.FloatTensor._values "torch.sparse.FloatTensor._values") and [`torch.sparse.FloatTensor._indices()`](#torch.sparse.FloatTensor._indices "torch.sparse.FloatTensor._indices"), as well as [`torch.Tensor.sparse_mask()`](tensors.html#torch.Tensor.sparse_mask "torch.Tensor.sparse_mask")). These operators are prefixed by an underscore to indicate that they reveal internal implementation details and should be used with care, since code that works with coalesced sparse tensors may not work with uncoalesced ones; generally speaking, it is safest to explicitly coalesce before working with these operators.
For example, suppose that we wanted to implement an operator by operating directly on [`torch.sparse.FloatTensor._values()`](#torch.sparse.FloatTensor._values "torch.sparse.FloatTensor._values"). Multiplication by a scalar can be implemented in the obvious way, as multiplication distributes over addition; however, square root cannot be implemented directly, since `sqrt(a + b) != sqrt(a) + sqrt(b)` (which is what would be computed if you were given an uncoalesced tensor).
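As a concrete sketch of the difference, here the same coordinate `(0, 2)` appears twice in the indices (variable names and printed outputs are illustrative and may differ by version):

```py
>>> i = torch.LongTensor([[0, 0], [2, 2]])   # duplicate coordinate (0, 2)
>>> v = torch.FloatTensor([3, 4])
>>> s = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))   # uncoalesced
>>> s._values()                              # internal storage still holds both entries
tensor([3., 4.])
>>> s.coalesce()._values()                   # duplicates are summed: 3 + 4
tensor([7.])
>>> s.to_dense()                             # dense conversion also sums duplicates
tensor([[0., 0., 7.],
        [0., 0., 0.]])
```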
```py
class torch.sparse.FloatTensor
...
...
_values()
_nnz()
```
## Functions
```py
torch.sparse.addmm(mat, mat1, mat2, beta=1, alpha=1)
```
In the forward pass, this function does exactly the same thing as [`torch.addmm()`](torch.html#torch.addmm "torch.addmm"), except that it supports backward for the sparse matrix `mat1`. `mat1` needs to have `sparse_dim = 2`. Note that the gradient of `mat1` is a coalesced sparse tensor.
`torch.sparse.mm(mat1, mat2)` performs a matrix multiplication of the sparse matrix `mat1` and the dense matrix `mat2`. Similar to [`torch.mm()`](torch.html#torch.mm "torch.mm"): if `mat1` is an (n × m) tensor and `mat2` is an (m × p) tensor, `out` will be an (n × p) dense tensor. `mat1` needs to have `sparse_dim = 2`. This function also supports backward for both matrices. Note that the gradient of `mat1` is a coalesced sparse tensor.
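A brief usage sketch of this sparse–dense product (shapes chosen arbitrarily; `torch.sparse.addmm` from above is shown for comparison):

```py
>>> a = torch.randn(2, 3).to_sparse().requires_grad_(True)   # sparse (2 x 3) matrix
>>> b = torch.randn(3, 4, requires_grad=True)                # dense  (3 x 4) matrix
>>> y = torch.sparse.mm(a, b)                                # dense  (2 x 4) result
>>> y.sum().backward()
>>> a.grad                                                   # gradient of `a` is a coalesced sparse tensor
>>> torch.sparse.addmm(torch.zeros(2, 4), a, b)              # beta*mat + alpha*(a @ b); with a zero `mat` this equals the mm result
```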
`torch.sparse.sum` returns the sum of each row of the SparseTensor `input` in the given dimension(s) `dim`. If `dim` is a list of dimensions, reduce over all of them. When summing over all of the `sparse_dim`s, this method returns a dense Tensor instead of a SparseTensor.
All summed `dim`s are squeezed (see [`torch.squeeze()`](torch.html#torch.squeeze "torch.squeeze")), resulting in an output tensor with fewer dimensions than `input` (one fewer per summed dimension).
During backward, only gradients at the `nnz` locations of `input` will propagate back. Note that the gradient of `input` is coalesced.
*   **input** ([_Tensor_](tensors.html#torch.Tensor "torch.Tensor")) – the input SparseTensor
*   **dim** ([_int_](https://docs.python.org/3/library/functions.html#int "(in Python v3.7)") _or_ _tuple of ints_) – a dimension or a list of dimensions to reduce. Default: reduce over all dims.
*   **dtype** (`torch.dtype`, optional) – the desired data type of the returned Tensor. Default: dtype of `input`.
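A brief sketch of these semantics (values and printed output are illustrative):

```py
>>> i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])
>>> v = torch.FloatTensor([3, 4, 5])
>>> S = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))
>>> torch.sparse.sum(S)          # summing over all sparse dims returns a dense Tensor
tensor(12.)
>>> torch.sparse.sum(S, dim=1)   # reducing only dim 1 returns a SparseTensor of shape (2,)
```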