Unverified · Commit 225d6757 authored by liuTINA0907, committed by GitHub

add (#5597)

* add new models and helix files (#5569)
Co-authored-by: liushuangqiao <liushuangqiao@beibeiMacBook-Pro.local>

* Add PP-HumanSegV2 Chinese Docs (#5568)

* Create .gitkeep

* Create .gitkeep

* Create .gitkeep

* Create .gitkeep

* Create .gitkeep

* Create .gitkeep

* Create .gitkeep

* Create .gitkeep

* Create .gitkeep

* Create .gitkeep

* update PP-HelixFold description (#5571)

* add new models and helix files

* update PP-HelixFold description
Co-authored-by: liushuangqiao <liushuangqiao@beibeiMacBook-Pro.local>

* Update introduction_cn.ipynb

* Update introduction_en.ipynb

* Update introduction_en.ipynb

* Update introduction_cn.ipynb

* add PP-ASR (#5559)

* add PP-ASR

* add PP-ASR

* add PP-ASR

* PP-ASR (#5572)

* add PP-ASR

* add PP-ASR

* add PP-ASR

* add PP-ASR

* add PP-ASR

* Update info.yaml (#5575)

* add install of paddleclas (#5581)

* add gradio for PP-TTS (#5582)

* Delete modelcenter/VIMER-UMS directory

* [Seg] Add PP-LiteSeg (Doc & APP) and PP-HumanSeg (APP) (#5576)

* Add PP-LiteSeg to Model Center

* Add APP for PP-HumanSegV2 and PP-LiteSeg

* Add PP-Matting (#5563)

* update info.yam english (#5586)

* update inference of PP-OCRv2 to paddlecv (#5587)

* add install of paddleclas

* update inference with paddlecv

* Update PP-HumanSegV2 info.yaml

* Change sub_tag_en in PP-HumanSeg info.yaml

* Change sub_tag_en in PP-LiteSeg info.yml

* Main (#5588)

* add install of paddleclas

* update inference with paddlecv

* change cv to Computer Vision

* Add ERNIE UIE in modelcenter (#5574)

* add_uie

* fill_yaml

* modify_info

* add_model_size

* imgae_size

* image_size

* modify

* change_en

* change_en

* Update info.yaml
Co-authored-by: liuTINA0907 <65896652+liuTINA0907@users.noreply.github.com>

* add PP-TSM (#5565)

* add PP-TSM

* remove log file

* add description

* fix Example in info.yaml

* fix notebook

* update task to video

* add ERNIE-M to model center (#5570)

* add ERNIE-M to model center

* add benchmark_en info.yaml

* remove unused cell

* rm image from repo and fix ipynb

* add introduction_en and fix some annotations

* add task to info.yaml

* add model_size and the number of parameters while modify ipynb following comments

* rewrite pretrained model into chinese

* modify task info

* Add det models (#5578)

* add PP-YOLO/PP-YOLOE/PP-YOLOE+/PP-Picodet models

* add PP-YOLO/PP-YOLOE/PP-YOLOE+/PP_PicoDet

* modify info.yml of det models

* Update info.yaml

* Update info.yaml

* Update info.yaml

* Update info.yaml
Co-authored-by: liuTINA0907 <65896652+liuTINA0907@users.noreply.github.com>

* update PP-ShiTuV2 files (#5562)

* update PP-ShiTuV2 files

* update PP-ShiTu

* add PP-ShiTuV2 APP files

* refine code

* remove PP-ShiTuV1' APP

* correct  in info.yaml

* update partial en docs

* update

* Update info.yaml

* Update info.yaml
Co-authored-by: liuTINA0907 <65896652+liuTINA0907@users.noreply.github.com>

* add PP-TInypose PP-Vehicle PP-HuamnV2 (#5573)

* add PP-TInyPose

* add PP-TInyPose info.yaml app.yaml

* * add PP-HumanV2

* add PP-Vehicle

* PP-TInyPose PP-Vehicle PP-HumanV2 add English version

* fixed info.yaml

* Update info.yaml

* fix PP-TInyPose/info.yaml
Co-authored-by: liuTINA0907 <65896652+liuTINA0907@users.noreply.github.com>

* Update info.yaml

* add ppmsvsr introduction_cn.ipynb (#5554)

* add ppmsvsr introduction_cn.ipynb

* add benchmark info and download

* add en doc

* fix info.yaml

* Update info.yaml
Co-authored-by: liuTINA0907 <65896652+liuTINA0907@users.noreply.github.com>

* Update info.yaml

* Update info.yml

* Update info.yaml

* Update info.yaml

* Delete modelcenter/ERNIE-Gram directory

* Delete modelcenter/ERNIE-Doc directory

* Delete modelcenter/ERNIE-ViL directory

* Delete modelcenter/PP-HumanSeg directory

* Add ERNIE-3.0 for model center (#5577)

* add ernie-3.0

* update ERNIE 3.0

* Delete modelcenter/PP-OCR directory

* Delete modelcenter/PP-Structure directory

* Update info.yaml

* Delete modelcenter/PP-TSMv2 directory

* Delete modelcenter/VIMER-StrucTexT V1 directory

* Update info.yaml

* update PP-ShiTu en docs (#5589)

* update PP-ShiTuV2 files

* update PP-ShiTu

* add PP-ShiTuV2 APP files

* refine code

* remove PP-ShiTuV1' APP

* correct  in info.yaml

* update partial en docs

* update

* update PP-ShiTu en doc

* refine

* add pplcnet pplcnetv2 pphgnet (#5560)

* add pplcnet

* add pplcnetv2

* add pphgnet

* fix

* update

* support gradio for PPLCNet PPLCNetV2 PPHGNet

* fix

* enrich contents

* Update benchmark_cn.md

* Update download_cn.md

* Update info.yaml

* Update info.yaml

* add example

* Add modelcenter en

* Add PP-HGNet en

* Add PP-LCNet and PP-LCNetV2 en

* Add modelcenter English

* rename v2 to V2

* fix

* Update info.yaml
Co-authored-by: liuTINA0907 <65896652+liuTINA0907@users.noreply.github.com>
Co-authored-by: zhangyubo0722 <zhangyubo0722@163.com>

* Update info.yaml

* Update info.yaml

* Update info.yaml

* update download.md (#5591)

* bb

* 2022.11.14

* 2022.11.14_10:36

* 2022.11.14 16:55

* 2022.11.14 17:03

* 2022.11.14 21:40

* 2022.11.14 21:50

* update info.yaml

* 2022.11.17 16:44

* update info.yml of PP-OCRv2、PP-OCRv3、PP-StructureV2 (#5593)

* add install of paddleclas

* update inference with paddlecv

* change cv to Computer Vision

* update info.yml

* Update info.yaml

* Update info.yaml

* Update info.yaml

* Update info.yaml
Co-authored-by: liuTINA0907 <65896652+liuTINA0907@users.noreply.github.com>

* Update info.yaml

* add files

* remove ernie-3.0 & ... models

* add ernie & ... models
Co-authored-by: liushuangqiao <liushuangqiao@beibeiMacBook-Pro.local>
Co-authored-by: cc <52520497+juncaipeng@users.noreply.github.com>
Co-authored-by: Zth9730 <32243340+Zth9730@users.noreply.github.com>
Co-authored-by: Ming <hww_ym@aliyun.com>
Co-authored-by: zhoujun <572459439@qq.com>
Co-authored-by: TianYuan <white-sky@qq.com>
Co-authored-by: wuyefeilin <30919197+wuyefeilin@users.noreply.github.com>
Co-authored-by: lugimzzz <63761690+lugimzzz@users.noreply.github.com>
Co-authored-by: huangjun12 <2399845970@qq.com>
Co-authored-by: Yam <40912707+Yam0214@users.noreply.github.com>
Co-authored-by: wangxinxin08 <69842442+wangxinxin08@users.noreply.github.com>
Co-authored-by: HydrogenSulfate <490868991@qq.com>
Co-authored-by: LokeZhou <aishenghuoaiqq@163.com>
Co-authored-by: wangna11BD <79366697+wangna11BD@users.noreply.github.com>
Co-authored-by: Jiaqi Liu <709153940@qq.com>
Co-authored-by: Tingquan Gao <35441050@qq.com>
Co-authored-by: zhangyubo0722 <zhangyubo0722@163.com>
Co-authored-by: 小小的香辛料 <little_spice@163.com>
Parent b352b78e
---
Model_Info:
name: "DeepCFD"
description: "基于DeepCFD模型实现流体绕过任意障碍物的定常流场仿真"
description_en: "Simulation of steady flow of fluid bypassing any obstacle based on DeepCFD model"
icon: "@placeholder: to be hosted on BOS once UE finalizes the unified design"
from_repo: ""
Task:
-
tag: "科学计算"
tag_en: "Scientific Computing"
sub_tag: "计算流体力学"
sub_tag_en: "Computational fluid dynamics"
Example:
-
tag: "工业/能源"
tag_en: "Industrial/Energy"
sub_tag: "计算流体力学"
sub_tag_en: "Computational fluid dynamics"
title: "基于PaddlePaddle的DeepCFD复现"
url: https://aistudio.baidu.com/aistudio/projectdetail/4400677?channelType=0&channel=0
url_en: https://aistudio.baidu.com/aistudio/projectdetail/4400677?channelType=0&channel=0
Datasets: "Data_DeepCFD"
Publisher: "Baidu"
License: "apache.2.0"
IfTraining: 1
IfOnlineDemo: 1
{
"cells": [
{
"cell_type": "markdown",
"id": "08bceb91-45c4-48ad-aa62-4acb2b5a2817",
"metadata": {
"tags": []
},
"source": [
"\n",
"# 1. Introduction to DeepCFD\n",
"\n",
"Computational fluid dynamics (CFD) solves the Navier-Stokes (N-S) equations to obtain the distribution of physical quantities in a fluid, such as density, pressure, and velocity, and is widely used in fields like microelectronics, civil engineering, and aerospace. However, some complex applications, such as airfoil optimization and fluid-structure interaction, require meshes with tens of millions or even hundreds of millions of cells (the figure below shows a structured mesh of the full internal and external flow field of an F-18 fighter), which makes CFD extremely expensive. A method that is substantially more efficient than traditional CFD while preserving its accuracy is therefore urgently needed. The authors of the paper propose to address this with deep learning: training on a small amount of traditional CFD simulation data to build a data-driven CFD surrogate model.\n",
"\n",
"<img src=\"http://www.cannews.com.cn/files/Resource/attachement/2017/0511/1494489582596.jpg\" alt=\"img\" style=\"zoom:80%;\" />\n",
"\n",
"This project reproduces the paper [DeepCFD: Efficient steady-state laminar flow approximation with deep convolutional neural networks](https://arxiv.org/abs/2004.08826) with the PaddlePaddle framework. The authors propose a CFD surrogate model based on convolutional neural networks (CNNs), called DeepCFD, which predicts the flow field of fluid passing arbitrary obstacles. The method has the following characteristics:\n",
"\n",
"1. DeepCFD is essentially a CNN-based surrogate model for fast computation of two-dimensional non-uniform steady laminar flow. Compared with traditional CFD methods, it achieves a speedup of at least three orders of magnitude while maintaining accuracy.\n",
"\n",
"2. DeepCFD predicts the fluid velocity in both the x and y directions, together with the fluid pressure.\n",
"\n",
"3. The training data are generated with OpenFOAM (an open-source CFD package).\n",
"\n",
"\n",
"# 2. Model Performance and Application Scenarios\n",
"\n",
"\n",
"# 3. How to Use the Model\n",
"\n",
"## 3.1 Environment Setup\n",
"\n",
"* Hardware: GPU, CPU\n",
"* Framework: PaddlePaddle >= 2.0.0\n",
"## 3.2 Dataset\n",
"\n",
"The dataset consists of 981 CFD cases computed by the original authors with OpenFOAM, stored in two files (dataX.pkl, dataY.pkl). Each file is 152 MB, with shape [981, 3, 172, 79]. dataX.pkl holds the three inputs: the SDF of the obstacle, the SDF of the computational-domain boundary, and the flow-region labels; dataY.pkl holds the three outputs: the x-velocity, the y-velocity, and the pressure of the fluid. The data were generated on a 172×79 computational grid.\n",
"\n",
"Dataset: https://aistudio.baidu.com/aistudio/datasetdetail/162674\n",
"\n",
"or https://www.dropbox.com/s/kg0uxjnbhv390jv/Data_DeepCFD.7z?dl=0\n",
"\n",
"## 3.3 Quick Start\n",
"\n",
"**Step 1: Fork this project**\n",
"\n",
"Search for DeepCFD_with_PaddlePaddle, select the corresponding version, and click Fork.\n",
"\n",
"![fork.png](https://github.com/zbyandmoon/Picture/blob/main/picture_DeepCFD/fork.png?raw=true)\n",
"\n",
"**Step 2: Start training**\n",
"\n",
"Open a terminal.\n",
"\n",
"![click_terminal.png](https://github.com/zbyandmoon/Picture/blob/main/picture_DeepCFD/click_terminal.png?raw=true)\n",
"\n",
"**Single-GPU training**\n",
"\n",
"```shell\n",
"python -m paddle.distributed.launch --gpus=0 train.py\n",
"```\n",
"\n",
"**Multi-GPU training**\n",
"\n",
"Taking four GPUs as an example:\n",
"\n",
"```shell\n",
"python -m paddle.distributed.launch --gpus=0,1,2,3 train.py\n",
"```\n",
"\n",
"Results are saved in the result folder (note: the result folder already contains a complete training run, which can be cleared before training). Multi-GPU training additionally creates a ./log/ folder holding the training logs:\n",
"\n",
"```\n",
".\n",
"├── log\n",
"│ ├── workerlog.0\n",
"│ ├── workerlog.1\n",
"│ ├── workerlog.2\n",
"│ └── workerlog.3\n",
"└── train.py\n",
"```\n",
"\n",
"Part of the training log looks like this:\n",
"\n",
"```\n",
"Epoch #1\n",
" Train Loss = 884808909.0\n",
" Train Total MSE = 10197.3000353043\n",
" Train Ux MSE = 3405.3426083044824\n",
" Train Uy MSE = 4334.0962839376825\n",
" Train p MSE = 2457.8616943359375\n",
" Validation Loss = 53205074.5\n",
" Validation Total MSE = 1027.7523040254237\n",
" Validation Ux MSE = 419.7688029661017\n",
" Validation Uy MSE = 543.9674920550848\n",
" Validation p MSE = 64.01604872881356\n",
"Epoch #2\n",
" Train Loss = 75408434.25\n",
" Train Total MSE = 603.198411591199\n",
" Train Ux MSE = 277.9321616481414\n",
" Train Uy MSE = 303.4222437021684\n",
" Train p MSE = 21.843986488987337\n",
" Validation Loss = 17892356.5\n",
" Validation Total MSE = 312.7194186970339\n",
" Validation Ux MSE = 169.64230501853814\n",
" Validation Uy MSE = 140.46789757680085\n",
" Validation p MSE = 2.6092084981627384\n",
"```\n",
"\n",
"**Step 3: Evaluate the model**\n",
"\n",
"```shell\n",
"python eval.py\n",
"```\n",
"\n",
"The output is:\n",
"\n",
"```\n",
"Total MSE is 1.895322561264038, Ux MSE is 0.6951090097427368, Uy MSE is 0.21001490950584412, p MSE is 0.9901986718177795\n",
"```\n",
"\n",
"**Step 4: Predict with the pretrained model**\n",
"\n",
"To display side-by-side flow-field comparisons, a separate predict.ipynb is provided for model validation; it needs to run in a Jupyter notebook environment.\n",
"\n",
"Running predict.ipynb, the predicted flow field around one obstacle is shown below:\n",
"\n",
"![paddle_contour.png](https://github.com/zbyandmoon/Picture/blob/main/picture_DeepCFD/paddle_contour.png?raw=true)\n",
"\n",
"## 3.4 Code Structure and Parameters\n",
"\n",
"### 3.4.1 Code structure\n",
"\n",
"```\n",
"DeepCFD_with_PaddlePaddle\n",
"├─ config\n",
"│ └─ config.ini\n",
"├─ data\n",
"│ └─ README.md\n",
"├─ model\n",
"│ └─ UNetEx.py\n",
"├─ result\n",
"│ ├─ DeepCFD_965.pdparams\n",
"│ ├─ results.json\n",
"│ └─ train_log.txt\n",
"├─ utils\n",
"│ ├─ functions.py\n",
"│ └─ train_functions.py\n",
"├─ README.md\n",
"├─ README_cn.md\n",
"├─ eval.py\n",
"├─ train.py\n",
"├─ predict.ipynb\n",
"└─ requirements.txt\n",
"```\n",
"\n",
"### 3.4.2 Parameters\n",
"\n",
"Training parameters can be set in /DeepCFD_with_PaddlePaddle/config/config.ini:\n",
"\n",
"| Parameter | Recommended value | Notes |\n",
"| ---------------- | -------------------- | --------------------------------------------- |\n",
"| batch_size | 64 | |\n",
"| train_test_ratio | 0.7 | fraction of the dataset used for training; 0.7 means 70% training / 30% testing |\n",
"| learning_rate | 0.001 | |\n",
"| weight_decay | 0.005 | AdamW only; changing the optimizer requires editing train.py |\n",
"| epochs | 1000 | |\n",
"| kernel_size | 5 | convolution kernel size |\n",
"| filters | 8, 16, 32, 32 | number of channels per convolution layer |\n",
"| batch_norm | 0 | batch normalization, 0 = False, 1 = True |\n",
"| weight_norm | 0 | weight normalization, 0 = False, 1 = True |\n",
"| data_path | ./data | dataset path, set as appropriate |\n",
"| save_path | ./result | save path for the model and training records, set as appropriate |\n",
"| model_name | DeepCFD_965.pdparams | name of the model file to load; the extension cannot be omitted |\n",
"\n",
"\n",
"## 3.5 Prediction Results\n",
"\n",
"The figure below shows the results reported in the original paper. Four metrics are used to evaluate the model: Total MSE, Ux MSE, Uy MSE, and p MSE (MSE is the mean squared error).\n",
"\n",
"![metrics.png](https://github.com/zbyandmoon/Picture/blob/main/picture_DeepCFD/metrics.png?raw=true)\n",
"\n",
"The figure below compares the CFD (note: simpleFoam is one of the OpenFOAM solvers) and DeepCFD flow fields for one obstacle shape.\n",
"\n",
"![pytorch_contour.png](https://github.com/zbyandmoon/Picture/blob/main/picture_DeepCFD/pytorch_contour.png?raw=true)\n",
"# 4. Model Principles\n",
"\n",
"The two figures below show the computation pipeline and the network architecture. DeepCFD is a U-Net-style network with 3 inputs and 3 outputs. The inputs are the signed distance function (SDF) of the obstacle in the computational domain, the SDF of the domain boundary, and the flow-region labels; the outputs are the x-velocity, the y-velocity, and the pressure of the fluid. The encoder's convolution layers downsample the 3 inputs into a latent representation, and the transposed-convolution layers of a mirrored decoder upsample it into the 3 output flow fields.\n",
"\n",
"![compute_process.png](https://github.com/zbyandmoon/Picture/blob/main/picture_DeepCFD/compute_process.png?raw=true)\n",
"\n",
"![DeepCFD_Net.png](https://github.com/zbyandmoon/Picture/blob/main/picture_DeepCFD/DeepCFD_Net.png?raw=true)\n",
"\n",
"\n",
"# 5. Notes\n",
"## 5.1 Acceptance Criteria\n",
"\n",
"The acceptance criteria for the reproduction are as follows:\n",
"\n",
"![standard.png](https://github.com/zbyandmoon/Picture/blob/main/picture_DeepCFD/standard.png?raw=true)\n",
"\n",
"## 5.2 Achieved Metrics\n",
"\n",
"The metrics achieved by the reproduction are:\n",
"\n",
"```\n",
"Total MSE = 1.8955801725387573\n",
"Ux MSE = 0.6953578591346741\n",
"Uy MSE = 0.21001338958740234\n",
"p MSE = 0.9902092218399048\n",
"```\n",
"\n",
"Total MSE, Ux MSE, and Uy MSE are within the acceptance range, and p MSE is slightly below the minimum of the acceptance range.\n",
"\n",
"## 5.3 Reproduction Links\n",
"\n",
"Paper reproduction:\n",
"\n",
"AI Studio: https://aistudio.baidu.com/aistudio/projectdetail/4400677?contributionType=1\n",
"\n",
"GitHub: https://github.com/zbyandmoon/DeepCFD_with_PaddlePaddle/tree/main/paddle\n",
"\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "py35-paddle1.2.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
},
"toc-autonumbering": true
},
"nbformat": 4,
"nbformat_minor": 5
}
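To make the SDF inputs described in the DeepCFD notebook concrete, here is a minimal sketch of how one input channel of a dataX-style tensor (the obstacle SDF on the 172×79 grid) could be constructed. This is illustrative only: the function name and the circular obstacle geometry are assumptions, not part of the released dataset, which was generated by the original authors with OpenFOAM.

```python
import numpy as np

def obstacle_sdf(nx=172, ny=79, cx=40.0, cy=39.5, r=8.0):
    """Signed distance to a circular obstacle on an nx-by-ny grid.

    Negative inside the obstacle, positive outside -- the usual
    convention for SDF input channels in DeepCFD-style surrogates.
    (Illustrative sketch; geometry and names are hypothetical.)
    """
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    return np.sqrt((x - cx) ** 2 + (y - cy) ** 2) - r

sdf = obstacle_sdf()
print(sdf.shape)        # (172, 79)
print(sdf[40, 39] < 0)  # True: this point lies inside the obstacle
```

Stacking this channel with the domain-boundary SDF and the flow-region labels would give the `[3, 172, 79]` per-sample input shape the notebook describes.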
---
Model_Info:
name: "ERNIE 3.0 Zeus"
description: "文心任务知识增强大模型"
description_en: "Task-knowledge-enhanced large model"
icon: "@placeholder: to be hosted on BOS once UE finalizes the unified design"
from_repo: ""
Task:
- tag_en: "Wenxin Big Model"
tag: "文心大模型"
sub_tag_en: "ERNIE-3.0 Zeus"
sub_tag: "任务知识增强"
Datasets: ""
Publisher: "Baidu"
License: "apache.2.0"
IfTraining: 1
IfOnlineDemo: 1
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 1. Introduction to ERNIE 3.0 Zeus\n",
"\n",
"ERNIE 3.0 Zeus is the latest upgrade of the ERNIE 3.0 model series. In addition to learning from unlabeled data and knowledge graphs, it continually learns from the data of over a hundred different task types. This task-knowledge enhancement significantly improves the model's zero-shot and few-shot learning ability.\n",
"\n",
"## 2. Model Principles\n",
"\n",
"ERNIE 3.0 Zeus uses unified multi-task learning to model semantic information of different granularities in the data. To further learn task-specific knowledge, it introduces a hierarchical prompt-learning technique. During data construction, a hierarchical text-prompt library organizes over a hundred different tasks into a unified natural-language form, which is learned jointly with massive unsupervised text and the Baidu knowledge graph. Training additionally introduces hierarchical soft prompts to model the commonalities and particularities across tasks, further strengthening the model on diverse downstream tasks.\n",
"\n",
"![ERNIE 3.0 Zeus.png](https://bce.bdstatic.com/doc/ai-doc/wenxin/ERNIE%203.0%20Zeus_b3f228d.png)\n",
"\n",
"## 3. Quick Start\n",
" \n",
"#### Friendly reminder\n",
"\n",
"Each account can make up to 200 free requests per day to the ERNIE 3.0 series services, with a total free quota of 2,000 requests. If you need more, please contact us: [application](https://wenxin.baidu.com/wenxin/apply3)\n",
"\n",
"\n",
"#### Get an API Key\n",
"\n",
"Log in to the Large Model Open API to obtain your personal API Key (AK) and Secret Key (SK); click this [link](https://wenxin.baidu.com/moduleApi/key) to view them. Keep your keys confidential; if a key leaks, delete it to protect your account.\n",
"\n",
"![image.png](https://bce.bdstatic.com/doc/AIDP/wenxin/image_3a02673.png)\n",
"\n",
"#### Step 1: Install the wenxin-api package"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"!pip install --upgrade wenxin-api"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"#### Step 2: Use it in Python\n",
"(Note: Python 3.7 or later is recommended.)\n",
"\n",
"You can call the model with the code below (fill the AK and SK obtained in the previous step into your ak and your sk).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# -*- coding: utf-8 -*-\n",
"import wenxin_api  # installable via \"pip install wenxin-api\"\n",
"from wenxin_api.tasks.free_qa import FreeQA\n",
"wenxin_api.ak = \"your ak\"  # your API Key\n",
"wenxin_api.sk = \"your sk\"  # your Secret Key\n",
"input_dict = {\n",
" \"text\": \"问题:交朋友的原则是什么?\\n回答:\",\n",
" \"seq_len\": 512,\n",
" \"topp\": 0.5,\n",
" \"penalty_score\": 1.2,\n",
" \"min_dec_len\": 2,\n",
" \"min_dec_penalty_text\": \"。?:![<S>]\",\n",
" \"is_unidirectional\": 0,\n",
" \"task_prompt\": \"qa\",\n",
" \"mask_type\": \"paragraph\"\n",
"}\n",
"rst = FreeQA.create(**input_dict)\n",
"print(rst)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Common status codes are listed here: [status code summary](https://wenxin.baidu.com/wenxin/docs#Ol6th102y)\n",
"\n",
"## 4. Model Performance\n",
"\n",
"Zero-shot/few-shot results on public datasets:\n",
"\n",
"![效果 1.png](https://bce.bdstatic.com/doc/ai-doc/wenxin/%E6%95%88%E6%9E%9C%201_bd6c09d.png)\n",
"\n",
"Results on few-shot understanding tasks:\n",
"\n",
"![image (29).png](https://bce.bdstatic.com/doc/ai-doc/wenxin/image%20%2829%29_69558b6.png)\n",
"\n",
"Results on few-shot generation tasks:\n",
"\n",
"![image (30).png](https://bce.bdstatic.com/doc/ai-doc/wenxin/image%20%2830%29_6a20d06.png)\n",
"\n",
"## 5. Application Scenarios\n",
"\n",
"All kinds of natural language understanding and generation tasks, such as intelligent writing, summarization, question answering, semantic retrieval, sentiment analysis, information extraction, text matching, and text correction.\n",
"\n",
"## 6. Usage\n",
"\n",
"ERNIE 3.0 Zeus has released the industry's first open Chinese generation API with hundreds of billions of parameters, which developers in all industries can call on to use its strong zero-shot and few-shot learning ability.\n",
"\n",
"#### Online experience in the PaddlePaddle Yangu community\n",
"\n",
"You can try the text understanding and text generation abilities of ERNIE 3.0 Zeus online in the PaddlePaddle Yangu community. Through the ERNIE 3.0 Zeus Prompt interface you can try more than a dozen preset prompt skills, including essay writing, copywriting, summarization, question generation, classical poetry, couplet continuation, novel continuation, free-form QA, sentiment analysis, information extraction, paraphrasing, text matching, text correction, cloze filling, and Text2SQL; you can also write custom prompts to experience its zero-shot and few-shot NLP ability. Likewise, through the ERNIE 3.0 Zeus interface you can enter arbitrary text to experience the model's continuation ability.\n",
"\n",
"#### Experience via API calls\n",
"\n",
"ERNIE 3.0 Zeus provides an [API entry point](https://wenxin.baidu.com/ernie3); you can also apply for an AK/SK in the API zone of the PaddlePaddle Yangu community to try the interface.\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "py35-paddle1.2.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
---
Model_Info:
name: "ERNIE-ViLG"
description: "文心知识增强跨模态图文生成大模型"
description_en: "Knowledge-enhanced large model for cross-modal text-and-image generation"
icon: "@placeholder: to be hosted on BOS once UE finalizes the unified design"
from_repo: ""
Task:
- tag_en: "Wenxin Big Model"
tag: "文心大模型"
sub_tag_en: "ERNIE-ViLG"
sub_tag: "图文生成"
Datasets: ""
Publisher: "Baidu"
License: "apache.2.0"
IfTraining: 1
IfOnlineDemo: 1
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## 1. Introduction to ERNIE-ViLG\n",
"\n",
"ERNIE-ViLG is a knowledge-enhanced cross-modal generation model that unifies text-to-image and image-to-text generation in a single model trained end to end, achieving cross-modal semantic alignment between text and images. It supports content creation and gives every user a low-barrier creative platform. [Click here to try it](https://wenxin.baidu.com/moduleApi/ernieVilg)\n",
"\n",
"## 2. Model Principles\n",
"Baidu's ERNIE-ViLG proposes a unified bidirectional cross-modal generation model that handles both image generation and text generation with an autoregressive formulation, better capturing semantic alignment across modalities and thereby improving generation in both directions. On MS-COCO, the authoritative public benchmark for text-to-image generation, ERNIE-ViLG far surpasses comparable models such as OpenAI's DALL-E on the FID (Fréchet Inception Distance) image-quality metric, and it sets new best results on several image-captioning tasks. Thanks to its strong cross-modal understanding, it also achieves leading results on generative visual question answering.\n",
"\n",
"## 3. Quick Start\n",
"### API description\n",
"\n",
"ERNIE-ViLG cross-modal text-to-image: based on the ERNIE-ViLG large model, it creates images automatically from the user's input text.\n",
"\n",
"* Friendly reminder:\n",
"\n",
"Each account can make up to 100 free requests per day to the ERNIE-ViLG API, with a total free quota of 500 requests. To raise the quota, describe your purchase needs in the [cooperation inquiry](https://wenxin.baidu.com/wenxin/apply) form.\n",
"\n",
"### Get an API Key\n",
"\n",
"Log in to the Large Model Open API to obtain your personal API Key (AK) and Secret Key (SK); click this [link](https://wenxin.baidu.com/moduleApi/key) to view them. Keep your keys confidential; if a key leaks, delete it to protect your account.\n",
"\n",
"![image.png](https://bce.bdstatic.com/doc/AIDP/wenxin/image_9f4929b.png)\n",
"\n",
"### How to call\n",
"\n",
"#### Call the API from a local Python environment"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Install the wenxin-api package\n",
"# Note: Python 3.7 or later is recommended\n",
"!pip install --upgrade wenxin-api"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"* Example call"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# -*- coding: utf-8 -*-\n",
"import wenxin_api  # installable via \"pip install wenxin-api\"\n",
"from wenxin_api.tasks.text_to_image import TextToImage\n",
"wenxin_api.ak = \"your_ak\"\n",
"wenxin_api.sk = \"your_sk\"\n",
"input_dict = {\n",
" \"text\": \"睡莲\",\n",
" \"style\": \"油画\"\n",
"}\n",
"rst = TextToImage.create(**input_dict)\n",
"print(rst)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"* Progress messages while waiting\n",
"```\n",
"2022-08-12 02:23:15,488 - model is painting now!, taskId: 1023101, waiting: 2m\n",
"2022-08-12 02:23:35,641 - model is painting now!, taskId: 1023101, waiting: 1m\n",
"2022-08-12 02:23:55,982 - model is painting now!, taskId: 1023101, waiting: 1m\n",
"```\n",
"\n",
"* Example response\n",
"\n",
"```\n",
"{\n",
" \"imgUrls\":[\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fbex\",\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fbi4\",\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fb5q\",\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fb30\",\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fbv9\",\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fba2\",\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fbbf\",\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fbms\",\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fbu7\",\n",
" \"https://wenxin.baidu.com/younger/file/ERNIE-ViLG/61157afdaef4f0dfef0d5e51459160fbct\"\n",
" ]\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"\n",
"\n",
"## 4. Model Performance\n",
"Sample images generated by ERNIE-ViLG:\n",
"\n",
"prompt: 震撼的科幻插图,神秘宇宙背景,一只巨大的星球, 大场景,超高清,未来主义\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/aafe6e178a0e471c87a66bbe12ad86df24c5a7df607e4dc79b9a579ef7b2f61e)\n",
"\n",
"prompt: 二次元 少女 梦幻 长袍 冰霜 帅气 画师krenz,二次元\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/c5a289eb93084d709b7fcf5f88202d5e4dc6f64b17c245d9a6232824ee44f2de)\n",
"\n",
"prompt: 浮世绘日本科幻幻想哑光绘画,动漫风格神道寺禅园英雄动作序列,包豪斯,概念艺术\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/7da123c7c1d24cefab9304f35cebb5d4bf22b63e85b5427cb3f8f08df74acd94)\n",
"\n",
"## 5. Usage\n",
"\n",
"\n",
"#### Online experience in the PaddlePaddle Yangu community\n",
"\n",
"Try the text-to-image ability of ERNIE-ViLG online in the PaddlePaddle Yangu community.\n",
"\n",
"#### Experience via API calls\n",
"\n",
"ERNIE-ViLG provides an API entry point; you can also apply for an AK/SK in the API zone of the PaddlePaddle Yangu community to try the interface."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "py35-paddle1.2.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
---
Model_Info:
name: "PINN_CFD"
description: "基于PINN的2D圆柱绕流模型设计及仿真验证"
description_en: "Design and simulation validation of a PINN-based 2D cylinder flow model"
icon: "@placeholder: to be hosted on BOS once UE finalizes the unified design"
from_repo: "PaddleScience"
Task:
-
tag: "科学计算"
tag_en: "Scientific Computing"
sub_tag: "2D圆柱绕流"
sub_tag_en: "2D cylinder flow"
Example:
-
tag: "工业/能源"
tag_en: "Industrial/Energy"
sub_tag: "圆柱绕流"
sub_tag_en: "Cylinder Flow"
title: "基于AI求解2D非定常圆柱绕流,真的很流体"
url: https://aistudio.baidu.com/aistudio/projectdetail/4178470?contributionType=1
url_en: https://aistudio.baidu.com/aistudio/projectdetail/4178470?contributionType=1
Datasets: "cylinder2D_continuous"
Publisher: "Baidu"
License: "apache.2.0"
IfTraining: 1
IfOnlineDemo: 1
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Introduction to the 2D Cylinder Flow Model\n",
"[Cylinder flow](https://baike.baidu.com/item/%E5%9C%86%E6%9F%B1%E7%BB%95%E6%B5%81/4949598?fr=aladdin) refers to low-speed steady flow past a two-dimensional cylinder, whose flow pattern depends only on the Reynolds number Re. When Re ≤ 1, inertial forces are secondary to viscous forces, the streamlines upstream and downstream of the cylinder are symmetric, and the drag coefficient is roughly inversely proportional to Re (drag coefficient between 10 and 60); this Re range is called the Stokes regime. As Re increases, the upstream and downstream streamlines gradually lose their symmetry.\n",
"\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/cad237431534419ba2c771727a2e0c9a5be0f0c75ddb4ab49245ff8321014b16)\n",
"\n",
" -- from \"The Magic of Fluids\" (《神奇的流体》), Gong Huasheng"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 2. Model Performance and Application Scenarios\n",
"## 2.1. Background\n",
"This section briefly introduces scientific computing and the relevant CFD concepts; a preview of the model's results follows (the PINN-based flow field is the AI for Science result).\n",
"* Horizontal flow velocity computed with the professional CFD package OpenFOAM\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/8c5b6a6eed6a405990ecf069103c5133e3969246b2e3417ba6649d43e1751a22)\n",
"\n",
"* Horizontal flow velocity computed with the PINN\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/7a3748202dad42878e053b03fddab1f1627d5d5c9bf348e78c17cc62751fd5f8)\n",
"\n",
"### 2.1.1. Scientific computing\n",
"\n",
"[AI for Science] AI has already been applied widely in CV, NLP, and other fields, replacing traditional methods in tasks such as defect detection, face detection, object segmentation, reading comprehension, and text generation, with large-scale industrial deployment. But across the much broader fields of industrial design and manufacturing, many scientific and engineering problems remain unsolved. For high-rise structures, long-span bridges, offshore oil platforms, and aircraft, the complex interaction between fluid and structure induces dynamic loads and hence flow-induced vibrations such as buffeting, vortex-induced vibration, galloping, and flutter, which affect structural safety and service life. Numerical simulation is one of the effective tools for studying flow-induced vibration, but traditional numerical methods demand huge computational resources and are severely limited in speed.\n",
"\n",
"Scientific computing often needs to simulate physical problems in concrete scenarios such as ocean meteorology, energy materials, aerospace, and biopharmaceuticals. Since most physical laws can be expressed as partial differential equations, solving PDE systems is the key to these problems. Neural networks are universal approximators: with enough neurons, a network can approximate any continuous function arbitrarily well. One way to apply AI to scientific computing is therefore to train a neural network to approximate the solution of a PDE system.\n",
"\n",
"See [Accelerating CFD with PaddlePaddle: principles and practice](https://mp.weixin.qq.com/s/pQtyKNOH2g-pyO7AqMuQmw)\n",
"\n",
"\n",
"[PaddleScience, the PaddlePaddle scientific-computing toolkit] aims to accelerate solving high-dimensional mathematical-physics equations. It combines numerical methods with physics-data processing, provides data-driven and physics-constrained deep learning solvers, and targets the difficulties of multi-physics, cross-scale simulation in CFD/CAE. It currently offers generalized differential equations, PINN (physics-informed neural network) and FNO (Fourier neural operator) solvers, and typical CFD cases such as cylinder flow and vortex-induced vibration. This AI-plus-data research paradigm addresses the high dimensionality, long time horizons, and cross-scale challenges of traditional scientific computing, and strengthens design, modeling, simulation, analysis, and optimization in intelligent manufacturing.\n",
"\n",
"\n",
"\n",
"\n",
"Readers who have made it this far are likely domain experts, so this article will not dwell on basic fluid-mechanics concepts.\n",
"\n",
"### 2.1.2. The PINN method\n",
"\n",
"[PINN] The physics-informed neural network (PINN) is a scientific machine learning method applied to traditional numerical problems, especially those involving partial differential equations (PDEs): equation solving, parameter inversion, model discovery, control, and optimization.\n",
"\n",
"Put simply, the neural network's training is constrained by the governing equations from physics. The space-time points that a CFD solver would use become training points, and **the network is optimized so that its outputs satisfy the initial/boundary conditions and the governing equations as closely as possible**. The trained network then acts as a fitted CFD solver.\n",
"\n",
"The idea is not new, but it has only been applied to CFD and traditional physical simulation with the rise of deep learning.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 3. How to Use the Model\n",
"Solving the cylinder-flow problem with the PINN method involves the steps below. We first need a mature deep learning framework; this project naturally uses Baidu's [PaddlePaddle](https://www.paddlepaddle.org.cn/) framework, which has proven very capable.\n",
"\n",
"PaddlePaddle provides automatic differentiation, a rich set of operators, and a broad API. On top of it, Baidu has developed the open-source scientific-computing toolkit [PaddleScience](https://github.com/PaddlePaddle/PaddleScience), which includes basic demos and provides a good starting point for deeper engineering exploration.\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/dd7ec9aeeef94ad9bad3768f370a8ded9e9e86dc9ec841cebc50a339adff1e71)\n",
"\n",
"\n",
"Now, let's walk through the implementation of this project step by step.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.1. Environment Setup\n",
"### 3.1.1. Install paddlepaddle-gpu\n",
"Before solving the cylinder-flow problem with AI, we need to set up the deep learning framework and the scientific-computing toolkit. Note:\n",
"* The current PaddleScience toolkit requires Paddle 2.3.2; see [Paddle installation](https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/develop/install/pip/linux-pip.html)\n",
"\n",
"For this example's environment, the installation command is:\n",
"```\n",
"!python -m pip install paddlepaddle-gpu==2.3.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html\n",
"```\n",
"\n",
"### 3.1.2. Install PaddleScience and its dependencies\n",
"You can clone it from git to the target path, or download and upload it.\n",
"cd into the PaddleScience path and install the dependencies; for this example:\n",
"\n",
"```\n",
"%cd ~/work/PaddleScience_CubeDomain/refactor_PaddleScience_0430/\n",
"!pip install -r requirements.txt\n",
"```\n",
"\n",
"### 3.1.3. Set the environment variable\n",
"For example:\n",
"```\n",
"%env PYTHONPATH=/home/aistudio/work/PaddleScience_CubeDomain/refactor_PaddleScience_0430\n",
"```\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"import paddle\n",
"!paddle --version"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [],
"source": [
"!python -m pip install paddlepaddle-gpu==2.3.2.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html\n",
"%cd ~/work/PaddleScience_CubeDomain/refactor_PaddleScience_0430/\n",
"!pip install -r requirements.txt\n",
"%env PYTHONPATH=/home/aistudio/work/PaddleScience_CubeDomain/refactor_PaddleScience_0430"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After the environment is configured, part of the log looks like this:\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/843466c039794e21a17beaeec45c02e2e9e7500fbdf2401c961a682d1be3e380)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"## 3.2. Loading the Dataset\n",
"\n",
"Training a neural network requires a large training set. We use the time t and space (x, y) of the CFD computation as the training set. This case adopts a semi-supervised approach: part of the CFD results supervise the network's training (which also accelerates convergence; more results will be released over time).\n",
"\n",
">This project uses the mesh defined in OpenFOAM directly as the neural network's training points, loaded with the load_cfd_data module.\n",
"\n",
"As in a CFD solve, the training data are split into initial values, boundary conditions, and interior points of the fluid domain. The following groups are defined, stored under ./data_0430:\n",
"* Interior training points:\n",
">domain_train ---- interior of the fluid domain\n",
"\n",
"* Boundary training points:\n",
"A velocity inlet, a pressure outlet, and a no-slip cylinder boundary are used, with Re = 100. These data are used to evaluate bc_loss.\n",
">domain_inlet ---- fluid-domain inlet \n",
">domain_outlet ---- fluid-domain outlet \n",
">domain_cylinder ---- around the cylinder\n",
"\n",
"* Initial values:\n",
"A relative initial time is taken as t = 0, and flow-field information at selected locations extracted from OpenFOAM supervises the network's initial values. These data are stored under ./data_0430/initial and are used to evaluate ic_loss.\n",
"\n",
"\n",
"Velocity and pressure at several key probe locations are also extracted from OpenFOAM as supervision data for training, stored under ./data_0430/probe.\n",
" \n",
"\n",
"Data loading can be implemented with the following module:\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/52bd00787a1547269cde66230d2d4455edc16d1091b341918002007757cad60d)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.3. Model Training\n",
"\n",
"With the data-loading and network/training modules in place, run 2d_unsteady_cylinder_train.py directly to train the cylinder-flow model.\n",
"For this example, on a V100-32G GPU, roughly 100,000 iterations (8+ hours) are expected to give satisfactory results.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!python examples/cylinder/2d_unsteady_cylinder_train.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Part of the training log:\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/ffc6273e1f284493970d2690ce667dec2a4e593b714e42678199e5c727549c62)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"## 3.4. Model Prediction\n",
"With the network saved after training, a single forward pass yields the results for all time steps of the training window. Just run the code below; the generated results are saved in the vtk folder and can be opened with ParaView.\n",
"```\n",
"!python examples/cylinder/2d_unsteady_cylinder_predict.py\n",
"```\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!python examples/cylinder/2d_unsteady_cylinder_predict.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.5. Accuracy Analysis\n",
"The PINN flow-field results are compared with the OpenFOAM results below:\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/04b6593bcc9443f5b7a605d1fb1719e7f9d5dc37e32a4a26ab4b31f6f0e5248a)\n",
"\n",
"\n",
"The comparison shows that the AI-solved cylinder-flow model is already highly accurate over most of the domain.\n",
"At some locations, however, such as within the boundary layer, the relative error is larger. Boundary layers have always been a hard problem in fluid mechanics, which shows that AI for CFD still has plenty of room to improve.\n",
"\n",
"\n",
"***Everyone interested in AI + Science is welcome to join the discussion and the PaddlePaddle scientific-computing [co-creation program](http://www.paddlepaddle.org.cn/science), and to explore and share more valuable demos built on the PaddlePaddle framework and its scientific-computing components!***\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 4. Model Principles\n",
"## 4.1. Physical Abstraction\n",
"\n",
"* Flow past a cylinder in a given computational domain\n",
"Boundary conditions:\n",
"\n",
"|Boundary|Type|\n",
"| -------- | ------ |\n",
"| Inlet | velocity inlet |\n",
"| Outlet | zero pressure |\n",
"| Cylinder and upper/lower walls | no slip |\n",
"\n",
"* Initial values:\n",
"\n",
"The flow field at a relative initial time is used inside the domain; any time after the flow has stabilized can serve as the relative initial time.\n",
"\n",
"\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/4b050e64262f4d86b5099c70f3cdf5e62c2ec5e787d64e28b0079f2e7d9ea66f)\n",
"\n",
"## 4.2. Physical Properties\n",
"This example considers an incompressible fluid and uses the 2D unsteady, time-continuous continuity and momentum equations as governing equations.\n",
"The cylinder radius is 1; for the Re = 100 case, the inlet velocity is 1 and the adjustable viscosity is 2e-2. The non-dimensionalized N-S equations are shown below:\n",
"\n",
"## 4.3. Governing Equations (Loss Function)\n",
"![](https://ai-studio-static-online.cdn.bcebos.com/aa303caf72c54029a6c21340c31e7791d10a4b2c585e4091aaa92e40f32b6fa0)\n",
"\n",
"\n",
"The boundary conditions (velocity inlet, pressure outlet, no-slip cylinder) are imposed accordingly, and the initial flow field is defined at a relative time 0, i.e. some time after the flow is fully established."
]
},
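{
"cell_type": "markdown",
"metadata": {},
"source": [
"For readers who cannot view the equation image above, the non-dimensional incompressible 2D unsteady Navier-Stokes equations used as the PDE residual can be written in their standard form (transcribed here for convenience; velocity $(u, v)$, pressure $p$, viscosity $\\nu = 2\\times 10^{-2}$):\n",
"\n",
"$$\\frac{\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} = 0$$\n",
"\n",
"$$\\frac{\\partial u}{\\partial t} + u\\frac{\\partial u}{\\partial x} + v\\frac{\\partial u}{\\partial y} = -\\frac{\\partial p}{\\partial x} + \\nu\\left(\\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2}\\right)$$\n",
"\n",
"$$\\frac{\\partial v}{\\partial t} + u\\frac{\\partial v}{\\partial x} + v\\frac{\\partial v}{\\partial y} = -\\frac{\\partial p}{\\partial y} + \\nu\\left(\\frac{\\partial^2 v}{\\partial x^2} + \\frac{\\partial^2 v}{\\partial y^2}\\right)$$\n"
]
},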
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## 4.4. 神经网络\n",
"\n",
"采用PINN的方法,并选择FCNet作为深度学习的神经网络,定义代码如下,并在Train模块中直接调用即可。\n",
"```\n",
"PINN = psolver.PysicsInformedNeuralNetwork(\n",
" layers=6, nu=2e-2, bc_weight=10, eq_weight=1, ic_weight=10, supervised_data_weight=10, \n",
" outlet_weight=1, training_type='half-supervised', net_params=net_params)\n",
"```\n",
"\n",
"本示例采用Adam优化器求解,关于超参数的定义如下所示:\n",
"```\n",
"adm_opt = psci.optimizer.Adam(learning_rate=1e-4, weight_decay=0.9,parameters=PINN.net.parameters())\n",
"```"
]
}
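,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Given the weights passed to the constructor above, the total training loss is presumably a weighted sum of the individual residuals (a sketch inferred from the parameter names `eq_weight`, `bc_weight`, `ic_weight`, `supervised_data_weight`, and `outlet_weight`; the exact composition is defined in the example's solver code):\n",
"\n",
"$$L = w_{eq} L_{eq} + w_{bc} L_{bc} + w_{ic} L_{ic} + w_{sup} L_{sup} + w_{out} L_{out}$$\n",
"\n",
"where each $L$ term is the mean squared residual of the corresponding constraint (PDE residual, boundary conditions, initial condition, supervised data, and outlet), and the weights take the values 1, 10, 10, 10, and 1 respectively in the call above.\n"
]
}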
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "py35-paddle1.2.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
},
"toc-autonumbering": true
},
"nbformat": 4,
"nbformat_minor": 4
}
# Model List

## 1. Training Benchmark
### 1.1 Hardware and Software Environment
...
* Portrait matting uses internal data.
### 1.3 Metrics
|Model Name | Description | Input Size |
|---|---|---|
|ppmatting_hrnet_w48 | General object matting | 512 |
...
* Model inference speed is measured on a single V100 GPU with batch size=1, using CUDA 10.2, CUDNN 7.6.5, and PaddlePaddle-gpu 2.3.2.
### 2.2 Datasets
* General object matting: the test split of Composition-1k or Distinctions-646.
* Portrait matting: the portrait subsets of PPM-100 and AIM-500, 195 images in total, denoted PPM-AIM-195.
......
...
"* A branch design that separates the semantic-prediction and detail-prediction tasks.\n",
"* The SCB (Semantic Context Branch) performs semantic prediction to keep the overall prediction of the image correct; it coarsely divides the image into three parts: foreground, background, and transition region.\n",
"* The HRDB (High-Resolution Detail Branch) maintains a high-resolution feature-extraction process so that details are preserved.\n",
"* The Guidance Flow design feeds the semantic information extracted by the SCB branch into the HRDB branch, so the HRDB branch can focus on detail prediction in the transition region.\n"
]
},
{
......
---
Model_Info:
name: "VIMER-CAE1.0"
description: "自监督预训练方法"
description_en: "masked image modeling approach for self-supervised representation pretraining"
icon: ""
from_repo: "VIMER"
Task:
- tag_en: "Wenxin Big Models"
tag: "文心大模型"
sub_tag_en: "VIMER-CAE"
sub_tag: "自监督预训练"
Datasets: "ImageNet1K, MSCOCO, ADE20K"
Publisher: "Baidu"
License: "Apache-2.0"
Paper:
- title: "Context Autoencoder for Self-Supervised Representation Learning"
url: "https://arxiv.org/abs/2202.03026"
IfTraining: 0
IfOnlineDemo: 1
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"scrolled": true
},
"outputs": [],
"source": [
"1、模型简介\n",
"\n",
"Mask Image Modeling (MIM) 方法,在 NLP 领域 (例如BERT) 得到了广泛的应用。随着 ViT 的提出和发展,人们也尝试将 MIM 应用到视觉领域并取得了一定进展。在此之前,视觉自监督算法主要沿着 contrastive learning 的思路去设计,而 MIM 打开了新的大门。“Context Autoencoder for Self-Supervised Representation Learning”,即VEMER-CAE,提出一种新的 MIM 方法 CAE。VIMER-CAE基于自监督图像掩码恢复原理,创新性地提出“图像表征和掩码预测独立学习”的预训练框架,通过编码模块对输入的图像块进行特征表达,并利用隐式上下文回归和解码模块对输入图像的掩码块进行特征表达恢复,在恢复建模问题上提高了预训练模型的视觉表征能力。基于自监督方法训练的预训练模型在下游各类图像任务上取得了明显的效果提升,其中在目标检测、实力分割、语义分割等任务的指标上达到SOT R效果。\n",
"\n",
"<div align=center><img src=\"https://github.com/PaddlePaddle/VIMER/blob/main/CAE/figs/CAE2.png\" width=\"60%\"></div>\n",
"\n",
"\n",
"2、模型效果\n",
"\n",
"1)分类场景ImageNet-1K数据集上的结果 \n",
"\n",
"| model | pretrain | Linear Prob(Top-1) | Attentive Prob(Top-1) | Finetune(Top-1) \n",
"|:--------:|:--------:|:--------:|:--------:| :--------:| \n",
"| [Vit-Base](https://vimer.bj.bcebos.com/CAE/pt_ep800_fp32_checkpoint-799.pd) | 800e | 69.3% | 76.7% | 83.7% | \n",
"| Vit-Large | 1600e | 78.1% | 81.2% | 86.3% |\n",
"\n",
"\n",
"2)分割场景ADE20K数据集上的结果\n",
"\n",
"| Backbone | Method | Crop Size | Lr Schd | mIoU | #params | FLOPs | \n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | \n",
"| Vit-Base-800e | UperNet | 512x512 | 160K | 49.69 | 81M | 1038G |\n",
"\n",
"\n",
"\n",
"\n",
"## Citing Context Autoencoder for Self-Supervised Representation Learning\n",
"\n",
"@article{chen2022context,\n",
" title={Context autoencoder for self-supervised representation learning},\n",
" author={Chen, Xiaokang and Ding, Mingyu and Wang, Xiaodi and Xin, Ying and Mo, Shentong and Wang, Yunhao and Han, Shumin and Luo, Ping and Zeng, Gang and Wang, Jingdong},\n",
" journal={arXiv preprint arXiv:2202.03026},\n",
" year={2022}\n",
"}\n",
"\n"
]
},
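{
"cell_type": "markdown",
"metadata": {},
"source": [
"The \"representation learning and masked-patch prediction learned separately\" framework described above can be summarized schematically: the latent contextual regressor predicts the latent representations of the masked patches from the visible-patch encodings, and the decoder reconstructs the masked patches from those predictions, so the training objective is roughly\n",
"\n",
"$$L = L_{recon}(\\hat{y}_{masked}, y_{masked}) + \\lambda\\, L_{align}(z_{pred}, z_{masked})$$\n",
"\n",
"where $L_{align}$ aligns the regressor's predicted latents $z_{pred}$ with the encoder's latents of the masked patches $z_{masked}$, and $L_{recon}$ is the reconstruction loss on the decoded targets. This is a schematic form; see the paper linked above for the exact targets and weighting.\n"
]
},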
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"scrolled": true
},
"outputs": [],
"source": [
"# 查看工作区文件, 该目录下的变更将会持久保存. 请及时清理不必要的文件, 避免加载过慢.\n",
"# View personal work directory. \n",
"# All changes under this directory will be kept even after reset. \n",
"# Please clean unnecessary files in time to speed up environment loading. \n",
"!ls /home/aistudio/work"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"scrolled": true
},
"outputs": [],
"source": [
"# 如果需要进行持久化安装, 需要使用持久化路径, 如下方代码示例:\n",
"# If a persistence installation is required, \n",
"# you need to use the persistence path as the following: \n",
"!mkdir /home/aistudio/external-libraries\n",
"!pip install beautifulsoup4 -t /home/aistudio/external-libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
"scrolled": true
},
"outputs": [],
"source": [
"# 同时添加如下代码, 这样每次环境(kernel)启动的时候只要运行下方代码即可: \n",
"# Also add the following code, \n",
"# so that every time the environment (kernel) starts, \n",
"# just run the following code: \n",
"import sys \n",
"sys.path.append('/home/aistudio/external-libraries')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"请点击[此处](https://ai.baidu.com/docs#/AIStudio_Project_Notebook/a38e5576)查看本环境基本用法. <br>\n",
"Please click [here ](https://ai.baidu.com/docs#/AIStudio_Project_Notebook/a38e5576) for more detailed instructions. "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "py35-paddle1.2.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}