Commit b569e043
Authored Jun 08, 2020 by Ting Wang
add api mapping
Signed-off-by: Ting Wang <kathy.wangting@huawei.com>
Parent: be28eb74
Showing 1 changed file with 160 additions and 0 deletions (+160 −0)
resource/api_mapping.md (new file, mode 100644)
# API Mapping
This community-contributed table maps PyTorch APIs to their closest MindSpore equivalents; a brief usage sketch follows the table.
| PyTorch APIs | MindSpore APIs |
|------------------------------------------------------|------------------------------------------------------------------------|
| torch.abs | mindspore.ops.operations.Abs |
| torch.acos | mindspore.ops.operations.ACos |
| torch.add | mindspore.ops.operations.TensorAdd |
| torch.argmax | mindspore.ops.operations.Argmax |
| torch.argmin | mindspore.ops.operations.Argmin |
| torch.atan2 | mindspore.ops.operations.Atan2 |
| torch.bmm | mindspore.ops.operations.BatchMatMul |
| torch.cat | mindspore.ops.operations.Concat |
| torch.chunk | mindspore.ops.operations.Split |
| torch.clamp | mindspore.ops.composite.clip_by_value |
| torch.cos | mindspore.ops.operations.Cos |
| torch.cuda.device_count | mindspore.communication.get_group_size |
| torch.cuda.set_device | mindspore.context.set_context |
| torch.cumprod | mindspore.ops.operations.CumProd |
| torch.cumsum | mindspore.ops.operations.CumSum |
| torch.distributed.all_gather | mindspore.ops.operations.AllGather |
| torch.distributed.all_reduce | mindspore.ops.operations.AllReduce |
| torch.distributed.get_rank | mindspore.communication.get_rank |
| torch.distributed.init_process_group | mindspore.communication.init |
| torch.div | mindspore.ops.operations.Div |
| torch.eq | mindspore.ops.operations.Equal |
| torch.erfc | mindspore.ops.operations.Erfc |
| torch.exp | mindspore.ops.operations.Exp |
| torch.eye | mindspore.ops.operations.Eye |
| torch.flatten | mindspore.ops.operations.Flatten |
| torch.floor | mindspore.ops.operations.Floor |
| torch.load | mindspore.train.serialization.load_checkpoint |
| torch.log | mindspore.ops.operations.Log |
| torch.log1p | mindspore.ops.operations.Log1p |
| torch.matmul | mindspore.ops.operations.MatMul |
| torch.max | mindspore.ops.operations.Maximum |
| torch.mean | mindspore.ops.operations.ReduceMean |
| torch.min | mindspore.ops.operations.Minimum |
| torch.mm | mindspore.ops.operations.MatMul |
| torch.mul | mindspore.ops.operations.Mul |
| torch.nn.AdaptiveAvgPool2d | mindspore.ops.operations.ReduceMean |
| torch.nn.AvgPool1d | mindspore.nn.AvgPool1d |
| torch.nn.AvgPool2d | mindspore.nn.AvgPool2d |
| torch.nn.BatchNorm1d | mindspore.nn.BatchNorm1d |
| torch.nn.BatchNorm2d | mindspore.nn.BatchNorm2d |
| torch.nn.Conv2d | mindspore.nn.Conv2d |
| torch.nn.ConvTranspose2d | mindspore.nn.Conv2dTranspose |
| torch.nn.CrossEntropyLoss | mindspore.nn.SoftmaxCrossEntropyWithLogits |
| torch.nn.CTCLoss | mindspore.ops.operations.CTCLoss |
| torch.nn.Dropout | mindspore.nn.Dropout |
| torch.nn.Embedding | mindspore.nn.Embedding |
| torch.nn.Flatten | mindspore.nn.Flatten |
| torch.nn.functional.adaptive_avg_pool2d | mindspore.nn.AvgPool2d |
| torch.nn.functional.avg_pool2d | mindspore.ops.operations.AvgPool |
| torch.nn.functional.binary_cross_entropy | mindspore.ops.operations.BinaryCrossEntropy |
| torch.nn.functional.conv2d | mindspore.ops.operations.Conv2D |
| torch.nn.functional.elu | mindspore.ops.operations.Elu |
| torch.nn.functional.interpolate | mindspore.ops.operations.ResizeBilinear |
| torch.nn.functional.log_softmax | mindspore.nn.LogSoftmax |
| torch.nn.functional.normalize | mindspore.ops.operations.L2Normalize |
| torch.nn.functional.one_hot | mindspore.ops.operations.OneHot |
| torch.nn.functional.pad | mindspore.ops.operations.Pad |
| torch.nn.functional.pixel_shuffle | mindspore.ops.operations.DepthToSpace |
| torch.nn.functional.relu | mindspore.ops.operations.ReLU |
| torch.nn.functional.softmax | mindspore.ops.operations.Softmax |
| torch.nn.functional.softplus | mindspore.ops.operations.Softplus |
| torch.nn.GroupNorm | mindspore.nn.GroupNorm |
| torch.nn.init.constant_ | mindspore.common.initializer.Constant |
| torch.nn.init.uniform_ | mindspore.common.initializer.Uniform |
| torch.nn.L1Loss | mindspore.nn.L1Loss |
| torch.nn.LayerNorm | mindspore.nn.LayerNorm |
| torch.nn.LeakyReLU | mindspore.nn.LeakyReLU |
| torch.nn.Linear | mindspore.nn.Dense |
| torch.nn.LSTM | mindspore.nn.LSTM |
| torch.nn.LSTMCell | mindspore.nn.LSTMCell |
| torch.nn.MaxPool2d | mindspore.nn.MaxPool2d |
| torch.nn.Module | mindspore.nn.Cell |
| torch.nn.Module.load_state_dict | mindspore.train.serialization.load_param_into_net |
| torch.nn.ModuleList | mindspore.nn.CellList |
| torch.nn.MSELoss | mindspore.nn.MSELoss |
| torch.nn.Parameter | mindspore.Parameter |
| torch.nn.ParameterList | mindspore.ParameterTuple |
| torch.nn.PixelShuffle | mindspore.ops.operations.DepthToSpace |
| torch.nn.PReLU | mindspore.nn.PReLU |
| torch.nn.ReLU | mindspore.nn.ReLU |
| torch.nn.ReplicationPad2d | mindspore.nn.Pad |
| torch.nn.Sequential | mindspore.nn.SequentialCell |
| torch.nn.Sigmoid | mindspore.nn.Sigmoid |
| torch.nn.SmoothL1Loss | mindspore.nn.SmoothL1Loss |
| torch.nn.Softmax | mindspore.nn.Softmax |
| torch.nn.Tanh | mindspore.nn.Tanh |
| torch.nn.Unfold | mindspore.nn.Unfold |
| torch.nn.Upsample | mindspore.ops.operations.ResizeBilinear |
| torch.norm | mindspore.nn.Norm |
| torch.numel | mindspore.ops.operations.Size |
| torch.ones | mindspore.ops.operations.OnesLike |
| torch.ones_like | mindspore.ops.operations.OnesLike |
| torch.optim.Adam | mindspore.nn.Adam |
| torch.optim.AdamW | mindspore.nn.AdamWeightDecay |
| torch.optim.lr_scheduler.CosineAnnealingWarmRestarts | mindspore.nn.dynamic_lr.cosine_decay_lr |
| torch.optim.lr_scheduler.StepLR | mindspore.nn.dynamic_lr.piecewise_constant_lr |
| torch.optim.Optimizer.step | mindspore.nn.TrainOneStepCell |
| torch.optim.RMSprop | mindspore.nn.RMSProp |
| torch.optim.SGD | mindspore.nn.SGD |
| torch.pow | mindspore.ops.operations.Pow |
| torch.prod | mindspore.ops.operations.ReduceProd |
| torch.randn | mindspore.ops.operations.TruncatedNormal |
| torch.round | mindspore.ops.operations.Round |
| torch.save | mindspore.train.serialization.save_checkpoint |
| torch.sigmoid | mindspore.ops.operations.Sigmoid |
| torch.sin | mindspore.ops.operations.Sin |
| torch.sparse.FloatTensor | mindspore.Tensor |
| torch.split | mindspore.ops.operations.Split |
| torch.sqrt | mindspore.ops.operations.Sqrt |
| torch.squeeze | mindspore.ops.operations.Squeeze |
| torch.stack | mindspore.ops.operations.Pack |
| torch.std_mean | mindspore.ops.operations.ReduceMean |
| torch.sum | mindspore.ops.operations.ReduceSum |
| torch.tanh | mindspore.ops.operations.Tanh |
| torch.tensor | mindspore.Tensor |
| torch.Tensor | mindspore.Tensor |
| torch.Tensor.chunk | mindspore.ops.operations.Split |
| torch.Tensor.fill_ | mindspore.ops.operations.Fill |
| torch.Tensor.float | mindspore.ops.operations.Cast |
| torch.Tensor.mm | mindspore.ops.operations.MatMul |
| torch.Tensor.mul | mindspore.ops.operations.Mul |
| torch.Tensor.pow | mindspore.ops.operations.Pow |
| torch.Tensor.repeat | mindspore.ops.operations.Tile |
| torch.Tensor.requires_grad_ | mindspore.Parameter.requires_grad |
| torch.Tensor.round | mindspore.ops.operations.Round |
| torch.Tensor.scatter | mindspore.ops.operations.ScatterNd |
| torch.Tensor.sigmoid | mindspore.nn.Sigmoid |
| torch.Tensor.sign | mindspore.ops.operations.Sign |
| torch.Tensor.size | mindspore.ops.operations.Shape |
| torch.Tensor.sqrt | mindspore.ops.operations.Sqrt |
| torch.Tensor.sub | mindspore.ops.operations.Sub |
| torch.Tensor.t | mindspore.ops.operations.Transpose |
| torch.Tensor.transpose | mindspore.ops.operations.Transpose |
| torch.Tensor.unsqueeze | mindspore.ops.operations.ExpandDims |
| torch.Tensor.view | mindspore.ops.operations.Reshape |
| torch.Tensor.zero_ | mindspore.ops.operations.ZerosLike |
| torch.transpose | mindspore.ops.operations.Transpose |
| torch.unbind | mindspore.ops.operations.Unpack |
| torch.unsqueeze | mindspore.ops.operations.ExpandDims |
| torch.utils.data.DataLoader | mindspore.DatasetHelper |
| torch.utils.data.Dataset | mindspore.dataset.MindDataset |
| torch.utils.data.distributed.DistributedSampler | mindspore.dataset.DistributedSampler |
| torch.zeros | mindspore.ops.operations.ZerosLike |
| torch.zeros_like | mindspore.ops.operations.ZerosLike |
| torchvision.datasets.ImageFolder | mindspore.dataset.ImageFolderDatasetV2 |
| torchvision.ops.nms | mindspore.ops.operations.NMSWithMask |
| torchvision.ops.roi_align | mindspore.ops.operations.ROIAlign |
| torchvision.transforms.CenterCrop | mindspore.dataset.transforms.vision.py_transforms.CenterCrop |
| torchvision.transforms.ColorJitter | mindspore.dataset.transforms.vision.py_transforms.RandomColorAdjust |
| torchvision.transforms.Compose | mindspore.dataset.transforms.vision.py_transforms.ComposeOp |
| torchvision.transforms.Normalize | mindspore.dataset.transforms.vision.py_transforms.Normalize |
| torchvision.transforms.RandomHorizontalFlip | mindspore.dataset.transforms.vision.py_transforms.RandomHorizontalFlip |
| torchvision.transforms.Resize | mindspore.dataset.transforms.vision.py_transforms.Resize |
| torchvision.transforms.ToTensor | mindspore.dataset.transforms.vision.py_transforms.ToTensor |
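
Each row pairs a PyTorch call (left) with the MindSpore primitive or Cell it corresponds to (right). The snippet below is a minimal, hypothetical sketch of one such pair, `torch.add` vs. `mindspore.ops.operations.TensorAdd`, written against the MindSpore version this table was authored for (where primitives live under `mindspore.ops.operations`); the tensor values and the use of PyNative mode are illustrative assumptions, not part of the table itself.

```python
# Minimal sketch (assumption: a MindSpore release contemporary with this table,
# where primitive operators are exposed under mindspore.ops.operations).
import numpy as np

# PyTorch side of the row "torch.add | mindspore.ops.operations.TensorAdd"
import torch
pt_out = torch.add(torch.tensor([1.0, 2.0]), torch.tensor([3.0, 4.0]))

# MindSpore side of the same row
from mindspore import Tensor, context
from mindspore.ops import operations as P

context.set_context(mode=context.PYNATIVE_MODE)  # eager execution for this sketch
add = P.TensorAdd()                              # primitives are instantiated first ...
ms_out = add(Tensor(np.array([1.0, 2.0], dtype=np.float32)),
             Tensor(np.array([3.0, 4.0], dtype=np.float32)))  # ... then called on Tensors

# Layer-level rows follow the same pattern, e.g. torch.nn.Linear -> mindspore.nn.Dense:
#   torch.nn.Linear(in_features=16, out_features=8)  ~  mindspore.nn.Dense(16, 8)
```

Most rows are one-to-one at the call level, but argument names, default values, and data layouts can differ between the two frameworks, so treat each pairing as a starting point rather than a drop-in replacement.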