flybirding10011 / DI-treetensor (forked from OpenDILab / DI-treetensor)
Commit 7f6bb05a
Authored on Sep 30, 2021 by HansBug 😆

doc(hansbug): update README.md

Parent: 7757783f
Showing 1 changed file with 90 additions and 36 deletions

README.md (+90 −36)
@@ -44,61 +44,115 @@ Only english version is provided now, the chinese documentation is still under d
You can easily create a tree value object based on `FastTreeValue`.
```python
import builtins
import os
from functools import partial

import treetensor.torch as torch

print = partial(builtins.print, sep=os.linesep)

if __name__ == '__main__':
    # create a tree tensor
    t = torch.randn({'a': (2, 3), 'b': {'x': (3, 4)}})
    print(t)  # tree based tensors
    print(torch.randn(4, 5))  # create a normal tensor
    print()

    # structure of tree
    print('Structure of tree')
    print('t.a:', t.a)  # t.a is a native tensor
    print('t.b:', t.b)  # t.b is a tree tensor
    print('t.b.x', t.b.x)  # t.b.x is a native tensor
    print()

    # math calculations
    print('Math calculation')
    print('t ** 2:', t ** 2)
    print('torch.sin(t).cos()', torch.sin(t).cos())
    print()

    # backward calculation
    print('Backward calculation')
    t.requires_grad_(True)
    t.std().arctan().backward()
    print('grad of t:', t.grad)  # backward
    print()

    # native operation
    # all the ops can be used as the original usage of `torch`
    print('Native operation')
    print('torch.sin(t.a)', torch.sin(t.a))  # sin of native tensor
```
The result should be
```text
<Tensor 0x7f0dae602760>
├── a --> tensor([[-1.2672, -1.5817, -0.3141],
│                 [ 1.8107, -0.1023,  0.0940]])
└── b --> <Tensor 0x7f0dae602820>
    └── x --> tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
                      [ 1.5956,  0.8825, -0.5702, -0.2247],
                      [ 0.9235,  0.4538,  0.8775, -0.2642]])
tensor([[-0.9559,  0.7684,  0.2682, -0.6419,  0.8637],
        [ 0.9526,  0.2927, -0.0591,  1.2804, -0.2455],
        [ 0.4699, -0.9998,  0.6324, -0.6885,  1.1488],
        [ 0.8920,  0.4401, -0.7785,  0.5931,  0.0435]])

Structure of tree
t.a:
tensor([[-1.2672, -1.5817, -0.3141],
        [ 1.8107, -0.1023,  0.0940]])
t.b:
<Tensor 0x7f0dae602820>
└── x --> tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
                  [ 1.5956,  0.8825, -0.5702, -0.2247],
                  [ 0.9235,  0.4538,  0.8775, -0.2642]])
t.b.x
tensor([[ 1.2224, -0.3445, -0.9980, -0.4085],
        [ 1.5956,  0.8825, -0.5702, -0.2247],
        [ 0.9235,  0.4538,  0.8775, -0.2642]])

Math calculation
t ** 2:
<Tensor 0x7f0dae602eb0>
├── a --> tensor([[1.6057, 2.5018, 0.0986],
│                 [3.2786, 0.0105, 0.0088]])
└── b --> <Tensor 0x7f0dae60c040>
    └── x --> tensor([[1.4943, 0.1187, 0.9960, 0.1669],
                      [2.5458, 0.7789, 0.3252, 0.0505],
                      [0.8528, 0.2059, 0.7699, 0.0698]])
torch.sin(t).cos()
<Tensor 0x7f0dae621910>
├── a --> tensor([[0.5782, 0.5404, 0.9527],
│                 [0.5642, 0.9948, 0.9956]])
└── b --> <Tensor 0x7f0dae6216a0>
    └── x --> tensor([[0.5898, 0.9435, 0.6672, 0.9221],
                      [0.5406, 0.7163, 0.8578, 0.9753],
                      [0.6983, 0.9054, 0.7185, 0.9661]])

Backward calculation
grad of t:
<Tensor 0x7f0dae60c400>
├── a --> tensor([[-0.0435, -0.0535, -0.0131],
│                 [ 0.0545, -0.0064, -0.0002]])
└── b --> <Tensor 0x7f0dae60cbe0>
    └── x --> tensor([[ 0.0357, -0.0141, -0.0349, -0.0162],
                      [ 0.0476,  0.0249, -0.0213, -0.0103],
                      [ 0.0262,  0.0113,  0.0248, -0.0116]])

Native operation
torch.sin(t.a)
tensor([[-0.9543, -0.9999, -0.3089],
        [ 0.9714, -0.1021,  0.0939]], grad_fn=<SinBackward>)
```
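The behaviour above — every operation applied leaf-by-leaf while the nested structure is preserved — can be sketched in plain Python. This is a conceptual illustration only, not the actual treetensor/`FastTreeValue` implementation; the function name `tree_map` and the toy float leaves are assumptions for the sketch:

```python
def tree_map(fn, tree):
    """Apply fn to every leaf of a nested dict, preserving the structure."""
    if isinstance(tree, dict):
        return {key: tree_map(fn, value) for key, value in tree.items()}
    return fn(tree)  # a leaf value

# a toy "tree tensor" with plain floats as leaves
t = {'a': 2.0, 'b': {'x': 3.0}}
print(tree_map(lambda v: v ** 2, t))  # {'a': 4.0, 'b': {'x': 9.0}}
```

The real library generalizes this idea: operators like `**` and methods like `.sin()` are lifted so that they recurse into the tree automatically and return native tensors at the leaves.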
For more quick start explanation and further usage, take a look at:
...