Unverified commit 4b455178 authored by: W wangqun, committed by: GitHub

Merge pull request #1 from wangqunbaidu/master

[D][feature] Commit the paddleJS code repository
[中文版](https://github.com/PaddlePaddle/Paddle-Lite/blob/develop/web/README_cn.md)
# Web
Web project is an open source deep learning framework designed to run in web browsers. It works on nearly every browser with WebGL support.
## Key Features
### Modular
Web project is built on the Atom system, a versatile framework that supports GPGPU operations on WebGL. It is highly modular and can be used to speed up computation tasks by utilizing WebGL.
### High Performance
Web project can run the TinyYolo model in less than 30 ms in Chrome. This is fast enough to run deep learning models in many realtime scenarios.
### Browser Coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
## How To Build & Deploy Demo
```bash
cd web # enter root directory for web project
npm i # install dependencies for npm
mkdir dist # create deployment directory
cd dist # enter deployment directory
git clone https://github.com/DerekYangMing/Paddle-Web-Models.git # get models
mv Paddle-Web-Models/separablemodel . # move models to specific directory
cd .. # return to root directory for web project
npm run testVideoDemo # start demo
```
## How To Preview Demo
1. Open Chrome and go to https://localhost:8123/.
2. Start the demo by clicking the [start detection] button.
3. Make sure at least one face is visible to the camera. The face detection rectangle is displayed if everything goes well.
## Feedback and Community Support
- Questions, reports, and suggestions are welcome through GitHub Issues!
- Forum: Opinions and questions are welcome at our [PaddlePaddle Forum](https://ai.baidu.com/forum/topic/list/168)
- QQ group chat: 696965088
# Paddle-JS
[中文版](https://github.com/PaddlePaddle/Paddle-Lite/blob/develop/web/README_cn.md)
# Web
Paddle.js is a web project for Baidu Paddle, an open source deep learning framework designed to run in web browsers. Load a pretrained Paddle.js SavedModel or Paddle Hub module into the browser and run inference through Paddle.js. It works on nearly every browser with WebGL support.
## Key Features
### Modular
Web project is built on the Atom system, a versatile framework that supports GPGPU operations on WebGL. It is highly modular and can be used to speed up computation tasks by utilizing WebGL.
### High Performance
Web project can run the TinyYolo model in less than 30 ms in Chrome. This is fast enough to run deep learning models in many realtime scenarios.
### Browser Coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
### Supported operations
Currently Paddle.js only supports a limited set of Paddle ops. See the full list below. If your model uses unsupported ops, the Paddle.js script will fail and produce a list of the unsupported ops in your model. Please file issues to let us know which ops you need supported.
[Supported operations Pages](./src/factory/fshader/README.md)
## Loading and running in the browser
If the original model was a SavedModel, use paddle.load().
```javascript
import Paddle from 'paddlejs';
import IO from 'paddlejs/feed/imageFeed'; // import paths depend on your build setup

// fw, fh, and path are placeholders; set them for your model
const fw = 320;
const fh = 320;
const path = 'model';
const io = new IO();
let feed = io.process({
    input: document.getElementById('image'),
    params: {
        gapFillWith: '#000', // what to fill the letterboxed area with after scaling
        targetSize: {
            height: fw,
            width: fh
        },
        targetShape: [1, 3, fh, fw], // target shape; renamed for compatibility with earlier logic
        // shape: [3, 608, 608], // preset tensor shape
        mean: [117.001, 114.697, 97.404], // preset mean
        // std: [0.229, 0.224, 0.225] // preset std
    }
});
const MODEL_CONFIG = {
    dir: `/${path}/`, // model URL
    main: 'model.json', // main graph
};
const paddle = new Paddle({
    urlConf: MODEL_CONFIG,
    options: {
        multipart: true,
        dataType: 'binary',
        options: {
            fileCount: 1, // how many chunks the model was split into
            getFileName(i) {
                return 'chunk_' + i + '.dat';
            }
        }
    }
});
const model = await paddle.load();
let inst = model.execute({
    input: feed
});
// a fetch call (or fetch output) reads the result back
let result = await inst.read();
```
Please see the feed documentation for details.
Please see the fetch documentation for details.
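As a rough guide to what the feed step does: preprocessors of this kind typically resize the input to the target shape and then normalize each channel with the preset mean (and std, when given). Below is a minimal sketch of that normalization, assuming canvas RGBA input; the function name and layout are illustrative, not the feed module's actual internals.
```javascript
// Hedged sketch: per-channel normalization as an image feed commonly applies it.
// `pixels` is assumed to be RGBA data read back from a canvas (0-255 per channel).
function normalize(pixels, mean, std = [1, 1, 1]) {
    const out = [];
    for (let i = 0; i < pixels.length; i += 4) {
        for (let c = 0; c < 3; c++) { // R, G, B; alpha is dropped
            out.push((pixels[i + c] - mean[c]) / std[c]);
        }
    }
    return out;
}
```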
## Run the converter script provided by the pip package
The converter expects a Paddle SavedModel, a Paddle Hub module, or the Paddle.js JSON format as input.
## Web-friendly format
The conversion script above produces 2 types of files:
- model.json (the dataflow graph and weight manifest file)
- group1-shard\*of\* (collection of binary weight files)
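For orientation, the manifest ties the graph to the binary weight shards. A minimal sketch of what it might contain follows; the field names below are assumptions for illustration, not the converter's documented output:
```javascript
// Hypothetical manifest layout (assumption, for illustration only)
const manifest = {
    ops: [/* the dataflow graph: one entry per op with type, inputs, outputs, attrs */],
    weightsManifest: [{
        paths: ['group1-shard1of1'], // the binary weight files
        weights: [{name: 'conv1_weights', shape: [32, 3, 3, 3], dtype: 'float32'}]
    }]
};
```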
## Preview Demo
Paddle.js provides some models pre-converted to the Paddle.js format. There are demos at the following URL; open a browser page to run them.
[Supported Demo Pages](./examples/README.md)
## Feedback and Community Support
- Questions, reports, and suggestions are welcome through GitHub Issues!
- Forum: Opinions and questions are welcome at our [PaddlePaddle Forum](https://ai.baidu.com/forum/topic/list/168)
- QQ group chat: 696965088
# PaddleJS Examples
Baidu PaddleJS runs JavaScript-friendly Paddle models, either ready-made or produced by the conversion tool, in the browser, enabling online inference in the browser.
## Demonstration
At present the Web project runs the TinyYolo model within 30 ms, which is sufficient for typical realtime scenarios.
## Browser Coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
## Build and Deploy
```bash
cd web # enter the root directory
npm i # install dependencies
mkdir dist # create the resource directory
cd dist # enter the resource directory
git clone https://github.com/DerekYangMing/Paddle-Web-Models.git # get the models
mv Paddle-Web-Models/separablemodel . # move the models to the designated location
cd .. # return to the root directory
npm run tinyYolo # start the tinyYolo online inference service
```
## How to Preview the Demo
1. Open https://localhost:8123/ in the browser.
2. Click the [Start Detection] button.
3. Point your face at the camera; if everything works, the face is detected.
## Result
![image](./tinyYolo/demoshow.png)
# Web
The Web project is an open source deep learning framework dedicated to running in the browser; it runs directly in any browser that supports WebGL.
## Key Features
### Modular
The Web project is built on top of the Atom component. Atom wraps WebGL so that general-purpose GPU computation tasks can be carried out conveniently. It is highly modular and can be used not only in this project but also in other WebGL acceleration scenarios.
### High Performance
At present the Web project runs the TinyYolo model within 30 ms, which is sufficient for typical realtime scenarios.
### Browser Coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
## How to Build and Deploy the Demo
```bash
cd web # enter the root directory
npm i # install dependencies
mkdir dist # create the resource directory
cd dist # enter the resource directory
git clone https://github.com/DerekYangMing/Paddle-Web-Models.git # get the models
mv Paddle-Web-Models/separablemodel . # move the models to the designated location
cd .. # return to the root directory
npm run testVideoDemo # start the demo service
```
## How to Preview the Demo
1. Open https://localhost:8123/ in the browser.
2. Click the [Start Detection] button.
3. Point your face at the camera; if everything works, the face is detected.
## Feedback and Community Support
* Questions, reports, and suggestions are welcome through GitHub Issues
* QQ group: 696965088
* Forum: you are welcome to share problems and experiences of using PaddlePaddle on the [PaddlePaddle Forum](https://ai.baidu.com/forum/topic/list/168)
[中文版](./README_cn.md)
# PaddleJS Examples
Baidu PaddleJS runs ready-made JavaScript models, or converted Paddle models, in the browser.
## Demonstration
At present the TinyYolo model runs within 30 ms in the Web project, which is enough for typical realtime scenarios.
## Browser coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
## Build and Deploy
```bash
cd web # go to the root directory
npm i # install dependencies
mkdir dist # create the resource directory
cd dist # enter the resource directory
git clone https://github.com/DerekYangMing/Paddle-Web-Models.git # get models
mv Paddle-Web-Models/separablemodel . # move the models to the designated location
cd .. # return to the root directory
npm run tinyYolo # run the tinyYolo demo
```
## Preview
1. Open https://localhost:&lt;port&gt;/ in the browser (use the port the dev server reports).
2. Click the upload picture button.
## Result
![image](./tinyYolo/demoshow.png)
# PaddleJS Examples
Baidu PaddleJS runs ready-made JavaScript models, or converted Paddle models, in the browser.
## Demonstration
At present the Web project runs the TinyYolo model within 30 ms, which is sufficient for typical realtime scenarios.
## Browser Coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
## Build and Deploy
```bash
cd web # enter the root directory
npm i # install dependencies
mkdir dist # create the resource directory
cd dist # enter the resource directory
git clone https://github.com/DerekYangMing/Paddle-Web-Models.git # get the models
mv Paddle-Web-Models/separablemodel . # move the models to the designated location
cd .. # return to the root directory
npm run tinyYolo # start the tinyYolo online inference service
```
## How to Preview the Demo
1. Open https://localhost:&lt;port&gt;/ in the browser (use the port the dev server reports).
2. Click the [Start Detection] button.
3. Point your face at the camera; if everything works, the face is detected.
## Result
![image](./tinyYolo/demoshow.png)
import 'babel-polyfill';
import Paddle from '../../src/paddle/paddle';
import IO from '../../src/feed/imageFeed';
import Utils from '../../src/utils/utils';
// import the label map
import Map from '../../test/data/map';
/**
* @file entry file for the model demo
* @author wangqun@baidu.com
*
*/
// model feed shapes
const feedShape = {
'608': {
fw: 608,
fh: 608
},
'320': {
fw: 320,
fh: 320
},
'320fused': {
fw: 320,
fh: 320
},
'separate': {
fw: 320,
fh: 320
}
};
const modelType = 'separate';
const {fw, fh} = feedShape[modelType];
// statistics
let loaded = false;
let model = {};
window.statistic = [];
async function run(input) {
// const input = document.getElementById('mobilenet');
const io = new IO();
let feed = io.process({
input: input,
params: {
targetShape: [1, 3, fh, fw], // target shape; renamed for compatibility with earlier logic
scale: 256, // scale size
width: 224, height: 224, // resized width and height
shape: [3, 224, 224], // preset tensor shape
mean: [0.485, 0.456, 0.406], // preset mean
std: [0.229, 0.224, 0.225] // preset std
}});
console.dir(['feed', feed]);
const path = 'model/huangfan';
if (!loaded) {
const MODEL_CONFIG = {
dir: `/${path}/`, // folder containing the model
main: 'model.json', // main file
};
loaded = true;
const paddle = new Paddle({
urlConf: MODEL_CONFIG,
options: {
multipart: false,
dataType: 'json'
}
});
model = await paddle.load();
}
let inst = model.execute({
input: feed
});
// there should really be a fetch call here, or a fetch output
let result = await inst.read();
console.dir(['result', result]);
let maxItem = Utils.getMaxItem(result);
document.getElementById('txt').innerHTML = Map['' + maxItem.index];
console.log('Recognized result: ' + Map['' + maxItem.index]);
// console.dir(['per-op time', window.statistic]);
// let total = statistic.reduce((all, cur) => {
// return all + cur.runTime;
// }, 0);
// console.log('op total = ' + total);
}
var image = '';
function selectImage(file) {
if (!file.files || !file.files[0]) {
return;
}
let reader = new FileReader();
reader.onload = function (evt) {
let img = document.getElementById('image');
img.src = evt.target.result;
img.onload = function () {
run(img);
};
image = evt.target.result;
};
reader.readAsDataURL(file.files[0]);
}
// selectImage
document.getElementById('uploadImg').onchange = function () {
selectImage(this);
};
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>paddle web demo</title>
<meta name="viewport" content="width=device-width,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no">
</head>
<body>
<img id="image" src="https://m.baidu.com/se/static/img/iphone/logo.png" style="max-width: 100%;">
<input type="file" id="uploadImg">
<div id="txt"></div>
<script src="index.es6"></script>
</body>
</html>
import 'babel-polyfill';
import Paddle from '../../src/paddle/paddle';
import IO from '../../src/feed/imageFeed';
/**
* @file entry file for the mnist model demo
* @author wangqun@baidu.com
*
*/
const pic = document.getElementById('pic');
const io = new IO();
let model = {};
async function run() {
let feed = io.process({
input: pic,
params: {
targetShape: [1, 3, 320, 320], // target shape; renamed for compatibility with earlier logic
scale: 256, // scale size
width: 224, height: 224, // resized width and height
shape: [3, 224, 224], // preset tensor shape
mean: [0.485, 0.456, 0.406], // preset mean
std: [0.229, 0.224, 0.225] // preset std
}});
console.dir(['feed', feed]);
const path = 'model/mnist';
const MODEL_CONFIG = {
dir: `/${path}/`, // folder containing the model
main: 'model.json', // main file
};
const paddle = new Paddle({
urlConf: MODEL_CONFIG,
options: {
multipart: false,
dataType: 'json'
}
});
model = await paddle.load();
let inst = model.execute({
input: feed
});
// there should really be a fetch call here, or a fetch output
let result = await inst.read();
// let inst = model.execute({input: cat});
// let res = inst.read();
console.dir(['result', result]);
// var fileDownload = require('js-file-download');
// fileDownload(res, 'result.csv');
}
run();
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>paddle web demo</title>
<meta name="viewport" content="width=device-width,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no">
</head>
<body>
<div>
<img id="pic" src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/wAALCAAcABwBAREA/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/9oACAEBAAA/APn+vTPDHwP8TeJ9DtdXiuLCzt7kbo0uWcOU7NgKRgjkc81i+O/hvrPgW8xco1zp7ELHfIm1HYqCRjJIPUc9cHFcbSgEnABJ9BXaafH8Rrrw3NpdjBrkmjohLQLE/l7c5OOPUHgV6Fcw3um/sxXNt4hZo7qW5X7FDdLtlRfOU7QG5zgSH/dPpXhFel/Bzxj4a8H6vfzeILZy86ILe6WLzPI27i3HUZ+XkA9PQ16Pc/Hfw7pM91LaXusa20wDRxSQRQww9eAdob35DfWuNg+Ny67Dfab430SDUNLuQxjW2UK8BwcAZPPOPmyCOvPSvH6KKKK//9k=" >
</div>
<script src="index.es6"></script>
</body>
</html>
import 'babel-polyfill';
import Paddle from '../../src/paddle/paddle';
import IO from '../../src/feed/imageFeed';
import Utils from '../../src/utils/utils';
// import the label map
import Map from '../../test/data/map';
/**
* @file entry file for the model demo
* @author wangqun@baidu.com
*
*/
// model feed shapes
const feedShape = {
'608': {
fw: 608,
fh: 608
},
'320': {
fw: 320,
fh: 320
},
'320fused': {
fw: 320,
fh: 320
},
'separate': {
fw: 244,
fh: 244
}
};
const modelType = 'separate';
const {fw, fh} = feedShape[modelType];
// statistics
let loaded = false;
let model = {};
window.statistic = [];
async function run(input) {
// const input = document.getElementById('mobilenet');
const io = new IO();
let feed = io.process({
input: input,
params: {
targetShape: [1, 3, fh, fw], // target shape; renamed for compatibility with earlier logic
scale: 256, // scale size
width: 224, height: 224, // resized width and height
shape: [3, 224, 224], // preset tensor shape
mean: [0.485, 0.456, 0.406], // preset mean
std: [0.229, 0.224, 0.225] // preset std
}});
console.log('feed', feed);
const path = 'model/mobileNet';
if (!loaded) {
const MODEL_CONFIG = {
dir: `/${path}/`, // folder containing the model
main: 'model.json', // main file
};
loaded = true;
const paddle = new Paddle({
urlConf: MODEL_CONFIG,
options: {
multipart: true,
dataType: 'json'
}
});
model = await paddle.load();
}
let inst = model.execute({
input: feed
});
// there should really be a fetch call here, or a fetch output
let result = await inst.read();
console.dir(['result', result]);
// let maxItem = Utils.getMaxItem(result);
// document.getElementById('txt').innerHTML = Map['' + maxItem.index];
// console.log('Recognized result: ' + Map['' + maxItem.index]);
// console.dir(['per-op time', window.statistic]);
// let total = statistic.reduce((all, cur) => {
// return all + cur.runTime;
// }, 0);
// console.log('op total = ' + total);
};
var image = '';
function selectImage(file) {
if (!file.files || !file.files[0]) {
return;
}
let reader = new FileReader();
reader.onload = function (evt) {
let img = document.getElementById('image');
img.src = evt.target.result;
img.onload = function() {
run(img);
};
image = evt.target.result;
}
reader.readAsDataURL(file.files[0]);
}
// selectImage
document.getElementById("uploadImg").onchange = function () {
selectImage(this);
};
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>paddle web demo</title>
<meta name="viewport" content="width=device-width,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no">
</head>
<body>
<img id="image" src="https://m.baidu.com/se/static/img/iphone/logo.png" style="max-width: 100%;">
<input type="file" id="uploadImg">
<div id="txt"></div>
<script src="index.es6"></script>
</body>
</html>
export default class Camera {
constructor(option) {
this.video = option.videoDom;
this.videoOption = option.videoOption;
}
// cross-browser helper for accessing the user's media devices
getUserMedia(constraints, success, error) {
if (navigator.mediaDevices.getUserMedia) {
// the current standard API
navigator.mediaDevices.getUserMedia(constraints).then(success).catch(error);
}
else if (navigator.webkitGetUserMedia) {
// WebKit-based browsers
navigator.webkitGetUserMedia(constraints, success, error);
}
else if (navigator.mozGetUserMedia) {
// Firefox
navigator.mozGetUserMedia(constraints, success, error);
}
else if (navigator.getUserMedia) {
// legacy API
navigator.getUserMedia(constraints, success, error);
}
}
success(stream) {
// compatibility with WebKit-based browsers
let CompatibleURL = window.URL || window.webkitURL;
// set the video stream as the source of the video element
// video.src = CompatibleURL.createObjectURL(stream);
this.video.srcObject = stream;
this.video.play();
}
error(error) {
console.log(`Failed to access user media devices: ${error.name}, ${error.message}`);
}
run() {
if (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.mediaDevices.getUserMedia) {
// request the user's media devices to access the camera
this.getUserMedia(this.videoOption, this.success.bind(this), this.error);
}
else {
alert('Accessing user media is not supported');
}
}
get curVideo() {
return this.video;
}
}
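// A minimal usage sketch for this helper (assumptions for illustration: a
// <video id="video"> element exists on the page, and getUserMedia generally
// requires an HTTPS context):
// const camera = new Camera({
//     videoDom: document.getElementById('video'),
//     videoOption: {video: true}
// });
// camera.run(); // prompts for camera permission and starts playback
// const videoEl = camera.curVideo;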
import 'babel-polyfill';
import Graph from '../../src/executor/loader';
import IO from '../../src/feed/imageFeed';
import Logger from '../../tools/logger';
window.log = new Logger();
// statistics
window.badCases = [];
// import Utils from '../src/utils/utils';
// import the label map
// import Map from '../test/data/map';
// import demoPic from './bbt1.jpg';
// import demoPic2 from './bbt2.jpg';
// import demoPic3 from './bbt3.jpg';
// import demoPic4 from './bbt4.jpg';
// import demoPic5 from './bbt5.jpg';
// post-processing test cases
// let tempPic = [demoPic, demoPic2, demoPic3, demoPic4, demoPic5];
/**
* @file entry file for the model demo
* @author wangqun@baidu.com
*
*/
// model output shapes
const outputShapes = {
'608': {
from: [19, 19, 25, 1],
to: [19, 19, 5, 5]
},
'320': {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
},
'320fused': {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
}
};
// model feed shapes
const feedShape = {
'608': {
fw: 608,
fh: 608
},
'320': {
fw: 320,
fh: 320
},
'320fused': {
fw: 320,
fh: 320
}
};
// model paths
const modelPath = {
'608': 'faceModel',
'320': 'facemodel320',
'320fused': 'facemodelfused'
};
const modelType = '320fused';
const path = modelPath[modelType];
// statistics
let loaded = false;
let model = {};
window.statistic = [];
const {fw, fh} = feedShape[modelType];
// the first run is slow, so warm it up first
async function preheat() {
const io = new IO();
let feed = io.process({
input: video,
params: {
gapFillWith: '#000', // what to fill the letterboxed area with after scaling
targetSize: {
height: fw,
width: fh
},
targetShape: [1, 3, fh, fw], // target shape; renamed for compatibility with earlier logic
// shape: [3, 608, 608], // preset tensor shape
mean: [117.001, 114.697, 97.404], // preset mean
// std: [0.229, 0.224, 0.225] // preset std
}
});
const MODEL_URL = `/${path}/model.json`;
const MODEL_CONFIG = {
dir: `/${path}/`, // folder containing the model
main: 'model.json', // main file
};
loaded = true;
const graphModel = new Graph();
log.start('load model');
model = await graphModel.loadGraphModel(MODEL_CONFIG, {
multipart: true,
dataType: 'binary',
binaryOption: {
fileCount: 1, // how many chunks the model was split into
getFileName(i) { // name of the i-th chunk
return 'chunk_0.dat';
}
},
feed
});
log.end('load model');
let inst = model.execute({
input: feed
});
};
async function run(input) {
// const input = document.getElementById('mobilenet');
log.start('total time');
const io = new IO();
log.start('preprocess');
let feed = io.process({
input: input,
params: {
gapFillWith: '#000', // what to fill the letterboxed area with after scaling
targetSize: {
height: fw,
width: fh
},
targetShape: [1, 3, fh, fw], // target shape; renamed for compatibility with earlier logic
// shape: [3, 608, 608], // preset tensor shape
mean: [117.001, 114.697, 97.404], // preset mean
// std: [0.229, 0.224, 0.225] // preset std
}
});
log.end('preprocess');
if (!loaded) {
const MODEL_URL = `/${path}/model.json`;
const MODEL_CONFIG = {
dir: `/${path}/`, // folder containing the model
main: 'model.json', // main file
};
loaded = true;
const graphModel = new Graph();
log.start('load model');
model = await graphModel.loadGraphModel(MODEL_CONFIG, {
multipart: true,
dataType: 'binary',
binaryOption: {
fileCount: 1, // how many chunks the model was split into
getFileName(i) { // name of the i-th chunk
return 'chunk_0.dat';
}
},
feed
});
log.end('load model');
}
log.start('run time');
let inst = model.execute({
input: feed
});
// there should really be a fetch call here, or a fetch output
let result = await inst.read();
log.end('postprocess: read data');
// console.dir(['result', result]);
log.start('postprocess: reshape');
const newData = [];
let newIndex = -1;
const [w, h, c, b] = outputShapes[modelType].from;
// c channel
for (let i = 0; i < c; i++) {
// height channel
for (let j = 0; j < h; j++) {
// width channel
for (let k = 0; k < w; k++) {
// position: (0, 0, 0, 0)
const index = j * (c * h) + k * c + i;
// const index = j * (i * k) + k * i + i;
newData[++newIndex] = result[index];
}
}
}
log.end('postprocess: reshape');
log.start('postprocess: draw box');
testRun(newData, input);
log.end('postprocess: draw box');
log.end('postprocess');
log.end('total time');
};
var image = '';
function selectImage(file) {
if (!file.files || !file.files[0]) {
return;
}
let reader = new FileReader();
reader.onload = function (evt) {
let img = document.getElementById('image');
img.src = evt.target.result;
img.onload = function() {
log.during('interval between runs');
run(img);
};
image = evt.target.result;
}
reader.readAsDataURL(file.files[0]);
};
// selectImage
document.getElementById("uploadImg").onchange = function () {
selectImage(this);
};
/* image post-processing by zhangmiao06 */
let preTestRun = (index) => {
let img = document.getElementById('image');
img.src = tempPic[index];
img.onload = function() {
testRun(testOutput.data[index], img);
};
};
let testRun = (data, img) => {
// console.log('ori', data);
const {from, to} = outputShapes[modelType];
// let shape = [1, 25, 19, 19];
let shape = [].concat(from).reverse();
// 1. from a flat array to 1*25*19*19
let formatData = reshapeMany({
data: data,
reshapeShape: shape
});
// console.log('1-D to n-D', formatData);
// 2. from 1*25*19*19 to 19*19*25*1
let formatData2 = transpose({
data: formatData,
shape: shape,
transposeShape: [2, 3, 1, 0]
});
// console.log('transpose', formatData2);
// 3. from 19*19*25*1 to 19*19*5*5
let formatData3 = reshape({
data: formatData2,
shape: from,
reshapeShape: to
});
// console.log('reshape', formatData3);
// 4. compute the boxes
let finalData = handleFinal(formatData3, shape, img);
// console.log('final', finalData);
// 5. draw the result
// handleCanvas(finalData, img);
handleDiv(finalData, img);
};
// sigmoid
let sigmoid = (x) => {
if (x < -100) {
return 0.0;
}
return 1 / (1 + Math.exp(-x));
};
// transpose
let transpose = (data) => {
let shape = data.shape;
let transposeShape = data.transposeShape;
let formatData = data.data;
let formatData2 = [];
for(let n = 0; n < shape[transposeShape[0]]; n++) {
let nData = [];
for(let c = 0; c < shape[transposeShape[1]]; c++) {
let cData = [];
for(let row = 0; row < shape[transposeShape[2]]; row++) {
let rowData = [];
for(let col = 0; col < shape[transposeShape[3]]; col++) {
let tempArr = [n, c, row, col];
let newN = n;
let newC = c;
let newW = row;
let newH = col;
transposeShape.forEach((item, index)=> {
switch(item) {
case 0:
newN = tempArr[index];
break;
case 1:
newC = tempArr[index];
break;
case 2:
newW = tempArr[index];
break;
case 3:
newH = tempArr[index];
}
});
rowData.push(formatData[newN][newC][newW][newH]);
}
cData.push(rowData);
}
nData.push(cData);
}
formatData2.push(nData);
}
return formatData2;
};
// reshape
let reshape = (data) =>{
let formatData2 = data.data;
let shape = data.shape;
let reshapeShape = data.reshapeShape;
// 1. flatten to 1-D
let tempData = reshapeOne({
data: formatData2,
shape: shape
});
// 2. reshape to n-D
let formatData3 = reshapeMany({
data: tempData,
reshapeShape: reshapeShape
});
return formatData3;
};
// flatten to 1-D
let reshapeOne = (data) => {
let formatData2 = data.data;
let shape = data.shape;
let tempData = [];
for(let n = 0; n < shape[0]; n++) {
for(let c = 0; c < shape[1]; c++) {
for(let row = 0; row < shape[2]; row++) {
for(let col = 0; col < shape[3]; col++) {
tempData.push(formatData2[n][c][row][col]);
}
}
}
}
return tempData;
};
// reshape to n-D
let reshapeMany = (data) => {
let tempData = data.data;
let reshapeShape = data.reshapeShape;
let formatData3 = [];
for(let n = 0; n < reshapeShape[0]; n++) {
let nData = [];
for(let c = 0; c < reshapeShape[1]; c++) {
let cData = [];
for(let row = 0; row < reshapeShape[2]; row++) {
let rowData = [];
for(let col = 0; col < reshapeShape[3]; col++) {
let tempN = n * reshapeShape[1] * reshapeShape[2] * reshapeShape[3];
let tempC = c * reshapeShape[2] * reshapeShape[3];
let tempRow = row * reshapeShape[3];
rowData.push(tempData[tempN + tempC + tempRow + col]);
}
cData.push(rowData);
}
nData.push(cData);
}
formatData3.push(nData);
}
return formatData3;
};
let calSize = (img) => {
let w1 = img.width;
let h1 = img.height;
let wh1 = Math.max(w1, h1);
// let factor = 608.0 / wh1;
let factor = fw / wh1;
let width = Math.round(w1 * factor);
let height = Math.round(h1 * factor);
return [w1, h1, width, height];
};
// compute the boxes
let handleFinal = (formatData3, shape, img) => {
let finalData = [];
let c = shape[2];
let [w1, h1, width, height] = calSize(img);
let factorX = Math.max(width, height) / width;
let factorY = Math.max(width, height) / height;
let maxProb = 0.0;
let anchors = [[1.603231, 2.094468], [6.041143, 7.080126], [2.882459, 3.518061], [4.266906, 5.178857], [9.041765, 10.66308]];
for(let i = 0; i < shape[2]; i++) {
for(let j = 0; j < shape[3]; j++) {
for(let k = 0; k < anchors.length; k++) {
let [a1, a2, a3, a4, prob] = formatData3[i][j][k];
prob = sigmoid(prob);
if (prob > maxProb && prob >= 0.5) {
let ctx = (j + sigmoid(a1)) / c * factorX;
let cty = (i + sigmoid(a2)) / c * factorY;
let col = Math.exp(a3) * anchors[k][0] / c * factorX;
let row = Math.exp(a4) * anchors[k][1] / c * factorY;
let x = (ctx - (col / 2));
let y = (cty - (row / 2));
finalData.push([x * w1, y * h1, col * w1, row * h1, prob]);
}
}
}
}
return finalData;
};
// draw to the canvas
let handleCanvas = (finalData, img) => {
let myCanvas = document.getElementById('myCanvas');
let [w1, h1, width, height] = calSize(img);
myCanvas.width = w1;
myCanvas.height = h1;
let ctx = myCanvas.getContext('2d');
ctx.drawImage(img, 0, 0, w1, h1);
finalData.forEach((demoArr,index) => {
let [demoLeft, demoTop, demoWidth, demoHeight, prob] = demoArr;
ctx.beginPath();
ctx.strokeStyle = 'red';
ctx.moveTo(demoLeft, demoTop);
ctx.lineTo(demoLeft + demoWidth, demoTop);
ctx.lineTo(demoLeft + demoWidth, demoTop + demoHeight);
ctx.lineTo(demoLeft, demoTop + demoHeight);
ctx.closePath();
ctx.stroke();
});
};
let handleDiv = (finalData, img) => {
if (finalData.length < 1) {
return false;
}
let myCanvas = document.getElementById('myDiv');
let maxIndex = 0;
if (finalData.length > 1) {
for(let i = 1; i < finalData.length; i++) {
if (finalData[i][4] > finalData[maxIndex][4]) {
maxIndex = i;
}
}
}
let [demoLeft, demoTop, demoWidth, demoHeight, prob] = finalData[maxIndex];
myCanvas.style.width = demoWidth + 'px';
myCanvas.style.height = demoHeight + 'px';
myCanvas.style.left = demoLeft + 'px';
myCanvas.style.top = demoTop + 'px';
};
// preTestRun(0);
// run(document.getElementById('pic'));
// import VConsole from 'vconsole';
import 'babel-polyfill';
import Paddle from '../../src/paddle/paddle';
import IO from '../../src/feed/imageFeed';
// import Logger from '../../tools/logger';
// window.log = new Logger();
// // statistics
// window.badCases = [];
// post-processing test cases
// let tempPic = [demoPic, demoPic2, demoPic3, demoPic4, demoPic5];
/**
* @file entry file for the model demo
* @author wangqun@baidu.com
*
*/
// model output shapes
const outputShapes = {
'608': {
from: [19, 19, 25, 1],
to: [19, 19, 5, 5]
},
'320': {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
},
'320fused': {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
},
'tinyYolo': {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
}
};
// model feed shapes
const feedShape = {
'608': {
fw: 608,
fh: 608
},
'320': {
fw: 320,
fh: 320
},
'320fused': {
fw: 320,
fh: 320
},
'tinyYolo': {
fw: 320,
fh: 320
}
};
// model paths
const modelPath = {
'tinyYolo': 'model/tinyYolo'
};
const modelType = 'tinyYolo';
const path = modelPath[modelType];
// statistics
let loaded = false;
let model = {};
window.statistic = [];
const {fw, fh} = feedShape[modelType];
// the first run is slow, so warm it up first
async function run(input) {
// const input = document.getElementById('mobilenet');
//log.start('total time');
const io = new IO();
// log.start('preprocess');
let feed = io.process({
input: input,
params: {
gapFillWith: '#000', // what to fill the letterboxed area with after scaling
targetSize: {
height: fw,
width: fh
},
targetShape: [1, 3, fh, fw], // target shape; renamed for compatibility with earlier logic
// shape: [3, 608, 608], // preset tensor shape
mean: [117.001, 114.697, 97.404], // preset mean
// std: [0.229, 0.224, 0.225] // preset std
}
});
// log.end('preprocess');
if (!loaded) {
const MODEL_CONFIG = {
dir: `/${path}/`, // folder containing the model
main: 'model.json', // main file
};
loaded = true;
const paddle = new Paddle({
urlConf: MODEL_CONFIG,
options: {
multipart: true,
dataType: 'binary',
options: {
fileCount: 1, // how many chunks the model was split into
getFileName(i) { // name of the i-th chunk
return 'chunk_0.dat';
}
}
}
});
model = await paddle.load();
}
let inst = model.execute({
input: feed
});
// there should really be a fetch call here, or a fetch output
let result = await inst.read();
// log.end('run time');
// log.end('postprocess: read data');
console.dir(['result', result]);
//log.start('postprocess: reshape');
const newData = [];
let newIndex = -1;
const [w, h, c, b] = outputShapes[modelType].from;
// c channel
for (let i = 0; i < c; i++) {
// height channel
for (let j = 0; j < h; j++) {
// width channel
for (let k = 0; k < w; k++) {
// position: (0, 0, 0, 0)
const index = j * (c * h) + k * c + i;
// const index = j * (i * k) + k * i + i;
newData[++newIndex] = result[index];
}
}
}
// log.end('postprocess: reshape');
// log.start('postprocess: draw box');
testRun(newData, input);
// log.end('postprocess: draw box');
// log.end('postprocess');
// log.end('total time');
}
var image = '';
function selectImage(file) {
if (!file.files || !file.files[0]) {
return;
}
let reader = new FileReader();
reader.onload = function (evt) {
let img = document.getElementById('image');
img.src = evt.target.result;
img.onload = function() {
//log.during('interval between runs');
run(img);
};
image = evt.target.result;
}
reader.readAsDataURL(file.files[0]);
}
// selectImage
document.getElementById("uploadImg").onchange = function () {
selectImage(this);
};
/* image post-processing by zhangmiao06 */
let preTestRun = (index) => {
let img = document.getElementById('image');
img.src = tempPic[index];
img.onload = function() {
testRun(testOutput.data[index], img);
};
};
let testRun = (data, img) => {
// console.log('ori', data);
const {from, to} = outputShapes[modelType];
// let shape = [1, 25, 19, 19];
let shape = [].concat(from).reverse();
// 1. from a flat array to 1*25*19*19
let formatData = reshapeMany({
data: data,
reshapeShape: shape
});
// console.log('1-D to n-D', formatData);
// 2. from 1*25*19*19 to 19*19*25*1
let formatData2 = transpose({
data: formatData,
shape: shape,
transposeShape: [2, 3, 1, 0]
});
// console.log('transpose', formatData2);
// 3. from 19*19*25*1 to 19*19*5*5
let formatData3 = reshape({
data: formatData2,
shape: from,
reshapeShape: to
});
// console.log('reshape', formatData3);
// 4. compute the boxes
let finalData = handleFinal(formatData3, shape, img);
// console.log('final', finalData);
// 5. draw the result
// handleCanvas(finalData, img);
handleDiv(finalData, img);
};
// sigmoid
let sigmoid = (x) => {
if (x < -100) {
return 0.0;
}
return 1 / (1 + Math.exp(-x));
}
// transpose
let transpose = (data) => {
let shape = data.shape;
let transposeShape = data.transposeShape;
let formatData = data.data;
let formatData2 = [];
for(let n = 0; n < shape[transposeShape[0]]; n++) {
let nData = [];
for(let c = 0; c < shape[transposeShape[1]]; c++) {
let cData = [];
for(let row = 0; row < shape[transposeShape[2]]; row++) {
let rowData = [];
for(let col = 0; col < shape[transposeShape[3]]; col++) {
let tempArr = [n, c, row, col];
let newN = n;
let newC = c;
let newW = row;
let newH = col;
transposeShape.forEach((item, index)=> {
switch(item) {
case 0:
newN = tempArr[index];
break;
case 1:
newC = tempArr[index];
break;
case 2:
newW = tempArr[index];
break;
case 3:
newH = tempArr[index];
}
});
rowData.push(formatData[newN][newC][newW][newH]);
}
cData.push(rowData);
}
nData.push(cData);
}
formatData2.push(nData);
}
return formatData2;
};
// reshape
let reshape = (data) =>{
let formatData2 = data.data;
let shape = data.shape;
let reshapeShape = data.reshapeShape;
// 1. flatten to 1-D
let tempData = reshapeOne({
data: formatData2,
shape: shape
});
// 2. reshape to n-D
let formatData3 = reshapeMany({
data: tempData,
reshapeShape: reshapeShape
});
return formatData3;
};
// flatten to 1-D
let reshapeOne = (data) => {
let formatData2 = data.data;
let shape = data.shape;
let tempData = [];
for(let n = 0; n < shape[0]; n++) {
for(let c = 0; c < shape[1]; c++) {
for(let row = 0; row < shape[2]; row++) {
for(let col = 0; col < shape[3]; col++) {
tempData.push(formatData2[n][c][row][col]);
}
}
}
}
return tempData;
};
// reshape to n-D
let reshapeMany = (data) => {
let tempData = data.data;
let reshapeShape = data.reshapeShape;
let formatData3 = [];
for(let n = 0; n < reshapeShape[0]; n++) {
let nData = [];
for(let c = 0; c < reshapeShape[1]; c++) {
let cData = [];
for(let row = 0; row < reshapeShape[2]; row++) {
let rowData = [];
for(let col = 0; col < reshapeShape[3]; col++) {
let tempN = n * reshapeShape[1] * reshapeShape[2] * reshapeShape[3];
let tempC = c * reshapeShape[2] * reshapeShape[3];
let tempRow = row * reshapeShape[3];
rowData.push(tempData[tempN + tempC + tempRow + col]);
}
cData.push(rowData);
}
nData.push(cData);
}
formatData3.push(nData);
}
return formatData3;
};
let calSize = (img) => {
let w1 = img.width;
let h1 = img.height;
let wh1 = Math.max(w1, h1);
// let factor = 608.0 / wh1;
let factor = fw / wh1;
let width = Math.round(w1 * factor);
let height = Math.round(h1 * factor);
return [w1, h1, width, height];
};
// compute the boxes
let handleFinal = (formatData3, shape, img) => {
let finalData = [];
let c = shape[2];
let [w1, h1, width, height] = calSize(img);
let factorX = Math.max(width, height) / width;
let factorY = Math.max(width, height) / height;
let maxProb = 0.0;
let anchors = [[1.603231, 2.094468], [6.041143, 7.080126], [2.882459, 3.518061], [4.266906, 5.178857], [9.041765, 10.66308]];
for(let i = 0; i < shape[2]; i++) {
for(let j = 0; j < shape[3]; j++) {
for(let k = 0; k < anchors.length; k++) {
let [a1, a2, a3, a4, prob] = formatData3[i][j][k];
prob = sigmoid(prob);
if (prob > maxProb && prob >= 0.5) {
let ctx = (j + sigmoid(a1)) / c * factorX;
let cty = (i + sigmoid(a2)) / c * factorY;
let col = Math.exp(a3) * anchors[k][0] / c * factorX;
let row = Math.exp(a4) * anchors[k][1] / c * factorY;
let x = (ctx - (col / 2));
let y = (cty - (row / 2));
finalData.push([x * w1, y * h1, col * w1, row * h1, prob]);
}
}
}
}
return finalData;
};
// draw to the canvas
let handleCanvas = (finalData, img) => {
let myCanvas = document.getElementById('myCanvas');
let [w1, h1, width, height] = calSize(img);
myCanvas.width = w1;
myCanvas.height = h1;
let ctx = myCanvas.getContext("2d");
ctx.drawImage(img, 0, 0, w1, h1);
finalData.forEach((demoArr,index) => {
let [demoLeft, demoTop, demoWidth, demoHeight, prob] = demoArr;
ctx.beginPath();
ctx.strokeStyle="red";
ctx.moveTo(demoLeft, demoTop);
ctx.lineTo(demoLeft + demoWidth, demoTop);
ctx.lineTo(demoLeft + demoWidth, demoTop + demoHeight);
ctx.lineTo(demoLeft, demoTop + demoHeight);
ctx.closePath();
ctx.stroke();
});
};
let handleDiv = (finalData, img) => {
if (finalData.length < 1) {
return false;
}
let myCanvas = document.getElementById('myDiv');
let maxIndex = 0;
if (finalData.length > 1) {
for(let i = 1; i < finalData.length; i++) {
if (finalData[i][4] > finalData[maxIndex][4]) {
maxIndex = i;
}
}
}
let [demoLeft, demoTop, demoWidth, demoHeight, prob] = finalData[maxIndex];
myCanvas.style.width = demoWidth + 'px';
myCanvas.style.height = demoHeight + 'px';
myCanvas.style.left = demoLeft + 'px';
myCanvas.style.top = demoTop + 'px';
};
// preTestRun(0);
// run(document.getElementById('pic'));
<!DOCYTPE html>
<html>
<head>
<meta charset="utf-8">
<title>paddleJS demo</title>
<meta name="viewport" content="width=device-width,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no,initial-scale=1,viewport-fit=cover">
<style>
p {
display: block;
margin-block-start: 1em;
margin-block-end: 1em;
margin-inline-start: 0px;
margin-inline-end: 0px;
}
#uploadImg {
border: solid 1px gray;
width: 100%;
padding: 10px;
background-color: #cabfbf;
color: white;
font-size: 16px;
}
p.section-head {
font-variant: small-caps;
text-transform: uppercase;
letter-spacing: 0.17em;
line-height: 1.2em;
font-weight: 500;
margin-top: 2em;
margin-bottom: 1em;
border-left: 1px solid #EF6C00;
padding-left: 24px;
color: #818181;
}
.image-wrap {
position: relative;
}
#image {
width: 100%;
}
#myDiv {
position: absolute;
border: 1px solid red;
box-sizing: border-box;
}
</style>
</head>
<body>
<div class="pdjs-example-container">
<section class="title-area">
<h1>Paddle.js: Using a pretrained tinyYolo</h1>
</section>
<section>
<p class="section-head">Description</p>
<p>
Please upload a picture with a face.
</p>
</section>
<section>
<p class="section-head">Model Output</p>
<div class="image-wrap">
<img id="mobilenet" />
</div>
<p>Original image</p>
<div class="image-wrap">
<img id="image" src=""/>
<div id="myDiv"></div>
</div>
<p>Canvas</p>
<canvas id="myCanvas"></canvas>
<br/>
<input type="file" id="uploadImg"/>
<div id="txt"></div>
</section>
</div>
</body>
<script src="index.es6"></script>
</html>
import 'babel-polyfill';
import Runner from '../src/executor/runner';
import Camera from '../src/executor/camera';
// debugging tools
// import vConsole from 'vconsole';
// const theConsole = new vConsole();
let startBtn = document.getElementById('start');
let stopBtn = document.getElementById('stop');
const runner = new Runner({
// which model to use
modelName: 'separate' // '608' | '320' | '320fused' | 'separate'
});
startBtn.disabled = true;
runner.preheat()
.then(() => {
startBtn.disabled = false;
});
const domElement = document.getElementById('video');
const myCanvas = document.getElementById('myDiv');
const videoSelect = document.getElementById('videoSelect');
let camera = new Camera({
// the DOM element used to display the camera feed
videoDom: domElement
});
camera.getDevices().then(devices => {
if (devices.length) {
camera.run(devices[0].deviceId);
devices.forEach((element, index) => {
let option = document.createElement('option');
option.value = element.deviceId;
option.text = (index + 1);
videoSelect.appendChild(option);
});
videoSelect.onchange = () => {
camera.run(videoSelect.value);
};
}
else {
camera.run();
}
});
const handleDiv = function (data) {
myCanvas.style.width = (data ? data[0] : 0) + 'px';
myCanvas.style.height = (data ? data[1] : 0) + 'px';
myCanvas.style.left = (data ? data[2] : 0) + 'px';
myCanvas.style.top = (data ? data[3] : 0) + 'px';
}
startBtn.addEventListener('click', function () {
startBtn.disabled = true;
runner.startStream(() => camera.curVideo, handleDiv);
});
stopBtn.addEventListener('click', function () {
startBtn.disabled = false;
runner.stopStream();
});
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Detect faces from the camera</title>
<style>
body {
margin: 0;
padding: 0;
}
#myDiv {
position: fixed;
border: 1px solid red;
box-sizing: border-box;
}
#video {
background: red;
}
</style>
</head>
<body>
<video id="video">
</video>
<p>
<button id="start">Start detection</button>
<button id="stop">Stop</button>
</p>
<select id="videoSelect"></select>
<p id="tips">tips</p>
<div id="myDiv"></div>
<script src="./videoDemo.es6"></script>
</body>
</html>
/* eslint-disable */
import 'babel-polyfill';
import Paddle from '../../src/paddle/paddle';
import IO from '../../src/feed/imageFeed';
// import Logger from '../../tools/logger';
// window.log = new Logger();
// // statistics
// window.badCases = [];
// post-processing test cases
// let tempPic = [demoPic, demoPic2, demoPic3, demoPic4, demoPic5];
/**
* @file entry file for the model demo
* @author wangqun@baidu.com
*
*/
// model output shapes
const outputShapes = {
'608': {
from: [19, 19, 25, 1],
to: [19, 19, 5, 5]
},
'320': {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
},
'320fused': {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
},
'separate': {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
}
};
// model feed shapes
const feedShape = {
'608': {
fw: 608,
fh: 608
},
'320': {
fw: 320,
fh: 320
},
'320fused': {
fw: 320,
fh: 320
},
'separate': {
fw: 320,
fh: 320
}
};
// model paths
const modelPath = {
'separate': 'model/tinyYolo'
};
const modelType = 'separate';
const path = modelPath[modelType];
// statistics
let loaded = false;
let model = {};
window.statistic = [];
const {fw, fh} = feedShape[modelType];
// the first run is slow, so warm it up first
async function run(input) {
// const input = document.getElementById('mobilenet');
//log.start('total time');
const io = new IO();
// log.start('preprocess');
let feed = io.process({
input: input,
params: {
gapFillWith: '#000', // what to fill the letterboxed area with after scaling
targetSize: {
height: fw,
width: fh
},
targetShape: [1, 3, fh, fw], // target shape; renamed for compatibility with earlier logic
// shape: [3, 608, 608], // preset tensor shape
mean: [117.001, 114.697, 97.404], // preset mean
// std: [0.229, 0.224, 0.225] // preset std
}
});
// log.end('preprocess');
if (!loaded) {
const MODEL_CONFIG = {
dir: `/${path}/`, // folder containing the model
main: 'model.json', // main file
};
loaded = true;
const paddle = new Paddle({
urlConf: MODEL_CONFIG,
options: {
multipart: true,
dataType: 'binary',
options: {
fileCount: 1, // how many chunks the model was split into
getFileName(i) { // name of the i-th chunk
return 'chunk_0.dat';
}
},
feed
}
});
model = await paddle.load();
}
let inst = model.execute({
input: feed
});
// there should really be a fetch call here, or a fetch output
let result = await inst.read();
// log.end('run time');
// log.end('postprocess: read data');
console.dir(['result', result]);
//log.start('postprocess: reshape');
const newData = [];
let newIndex = -1;
const [w, h, c, b] = outputShapes[modelType].from;
// c channel
for (let i = 0; i < c; i++) {
// height channel
for (let j = 0; j < h; j++) {
// width channel
for (let k = 0; k < w; k++) {
// position: (0, 0, 0, 0)
const index = j * (c * h) + k * c + i;
// const index = j * (i * k) + k * i + i;
newData[++newIndex] = result[index];
}
}
}
// log.end('postprocess: reshape');
// log.start('postprocess: draw box');
testRun(newData, input);
// log.end('postprocess: draw box');
// log.end('postprocess');
// log.end('total time');
}
var image = '';
function selectImage(file) {
if (!file.files || !file.files[0]) {
return;
}
let reader = new FileReader();
reader.onload = function (evt) {
let img = document.getElementById('image');
img.src = evt.target.result;
img.onload = function() {
//log.during('interval between runs');
run(img);
};
image = evt.target.result;
}
reader.readAsDataURL(file.files[0]);
}
// selectImage
document.getElementById("uploadImg").onchange = function () {
selectImage(this);
};
/* image post-processing by zhangmiao06 */
let preTestRun = (index) => {
let img = document.getElementById('image');
img.src = tempPic[index];
img.onload = function() {
testRun(testOutput.data[index], img);
};
};
let testRun = (data, img) => {
// console.log('ori', data);
const {from, to} = outputShapes[modelType];
// let shape = [1, 25, 19, 19];
let shape = [].concat(from).reverse();
// 1. from a flat array to 1*25*19*19
let formatData = reshapeMany({
data: data,
reshapeShape: shape
});
// console.log('1-D to n-D', formatData);
// 2. from 1*25*19*19 to 19*19*25*1
let formatData2 = transpose({
data: formatData,
shape: shape,
transposeShape: [2, 3, 1, 0]
});
// console.log('transpose', formatData2);
// 3. from 19*19*25*1 to 19*19*5*5
let formatData3 = reshape({
data: formatData2,
shape: from,
reshapeShape: to
});
// console.log('reshape', formatData3);
// 4. compute the boxes
let finalData = handleFinal(formatData3, shape, img);
// console.log('final', finalData);
// 5. draw the result
// handleCanvas(finalData, img);
handleDiv(finalData, img);
};
// sigmoid
let sigmoid = (x) => {
if (x < -100) {
return 0.0;
}
return 1 / (1 + Math.exp(-x));
}
// transpose
let transpose = (data) => {
let shape = data.shape;
let transposeShape = data.transposeShape;
let formatData = data.data;
let formatData2 = [];
for(let n = 0; n < shape[transposeShape[0]]; n++) {
let nData = [];
for(let c = 0; c < shape[transposeShape[1]]; c++) {
let cData = [];
for(let row = 0; row < shape[transposeShape[2]]; row++) {
let rowData = [];
for(let col = 0; col < shape[transposeShape[3]]; col++) {
let tempArr = [n, c, row, col];
let newN = n;
let newC = c;
let newW = row;
let newH = col;
transposeShape.forEach((item, index)=> {
switch(item) {
case 0:
newN = tempArr[index];
break;
case 1:
newC = tempArr[index];
break;
case 2:
newW = tempArr[index];
break;
case 3:
newH = tempArr[index];
}
});
rowData.push(formatData[newN][newC][newW][newH]);
}
cData.push(rowData);
}
nData.push(cData);
}
formatData2.push(nData);
}
return formatData2;
};
// reshape
let reshape = (data) =>{
let formatData2 = data.data;
let shape = data.shape;
let reshapeShape = data.reshapeShape;
// 1. flatten to 1-D
let tempData = reshapeOne({
data: formatData2,
shape: shape
});
// 2. reshape to n-D
let formatData3 = reshapeMany({
data: tempData,
reshapeShape: reshapeShape
});
return formatData3;
};
// flatten to 1-D
let reshapeOne = (data) => {
let formatData2 = data.data;
let shape = data.shape;
let tempData = [];
for(let n = 0; n < shape[0]; n++) {
for(let c = 0; c < shape[1]; c++) {
for(let row = 0; row < shape[2]; row++) {
for(let col = 0; col < shape[3]; col++) {
tempData.push(formatData2[n][c][row][col]);
}
}
}
}
return tempData;
};
// reshape to n-D
let reshapeMany = (data) => {
let tempData = data.data;
let reshapeShape = data.reshapeShape;
let formatData3 = [];
for(let n = 0; n < reshapeShape[0]; n++) {
let nData = [];
for(let c = 0; c < reshapeShape[1]; c++) {
let cData = [];
for(let row = 0; row < reshapeShape[2]; row++) {
let rowData = [];
for(let col = 0; col < reshapeShape[3]; col++) {
let tempN = n * reshapeShape[1] * reshapeShape[2] * reshapeShape[3];
let tempC = c * reshapeShape[2] * reshapeShape[3];
let tempRow = row * reshapeShape[3];
rowData.push(tempData[tempN + tempC + tempRow + col]);
}
cData.push(rowData);
}
nData.push(cData);
}
formatData3.push(nData);
}
return formatData3;
};
let calSize = (img) => {
let w1 = img.width;
let h1 = img.height;
let wh1 = Math.max(w1, h1);
// let factor = 608.0 / wh1;
let factor = fw / wh1;
let width = Math.round(w1 * factor);
let height = Math.round(h1 * factor);
return [w1, h1, width, height];
};
// compute the boxes
let handleFinal = (formatData3, shape, img) => {
let finalData = [];
let c = shape[2];
let [w1, h1, width, height] = calSize(img);
let factorX = Math.max(width, height) / width;
let factorY = Math.max(width, height) / height;
let maxProb = 0.0;
let anchors = [[1.603231, 2.094468], [6.041143, 7.080126], [2.882459, 3.518061], [4.266906, 5.178857], [9.041765, 10.66308]];
for(let i = 0; i < shape[2]; i++) {
for(let j = 0; j < shape[3]; j++) {
for(let k = 0; k < anchors.length; k++) {
let [a1, a2, a3, a4, prob] = formatData3[i][j][k];
prob = sigmoid(prob);
if (prob > maxProb && prob >= 0.5) {
let ctx = (j + sigmoid(a1)) / c * factorX;
let cty = (i + sigmoid(a2)) / c * factorY;
let col = Math.exp(a3) * anchors[k][0] / c * factorX;
let row = Math.exp(a4) * anchors[k][1] / c * factorY;
let x = (ctx - (col / 2));
let y = (cty - (row / 2));
finalData.push([x * w1, y * h1, col * w1, row * h1, prob]);
}
}
}
}
return finalData;
};
// draw to the canvas
let handleCanvas = (finalData, img) => {
let myCanvas = document.getElementById('myCanvas');
let [w1, h1, width, height] = calSize(img);
myCanvas.width = w1;
myCanvas.height = h1;
let ctx = myCanvas.getContext("2d");
ctx.drawImage(img, 0, 0, w1, h1);
finalData.forEach((demoArr,index) => {
let [demoLeft, demoTop, demoWidth, demoHeight, prob] = demoArr;
ctx.beginPath();
ctx.strokeStyle="red";
ctx.moveTo(demoLeft, demoTop);
ctx.lineTo(demoLeft + demoWidth, demoTop);
ctx.lineTo(demoLeft + demoWidth, demoTop + demoHeight);
ctx.lineTo(demoLeft, demoTop + demoHeight);
ctx.closePath();
ctx.stroke();
});
};
let handleDiv = (finalData, img) => {
if (finalData.length < 1) {
return false;
}
let myCanvas = document.getElementById('myDiv');
let maxIndex = 0;
if (finalData.length > 1) {
for(let i = 1; i < finalData.length; i++) {
if (finalData[i][4] > finalData[maxIndex][4]) {
maxIndex = i;
}
}
}
let [demoLeft, demoTop, demoWidth, demoHeight, prob] = finalData[maxIndex];
myCanvas.style.width = demoWidth + 'px';
myCanvas.style.height = demoHeight + 'px';
myCanvas.style.left = demoLeft + 'px';
myCanvas.style.top = demoTop + 'px';
};
// preTestRun(0);
// run(document.getElementById('pic'));
/* eslint-enable */
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>paddle web demo</title>
<meta name="viewport" content="width=device-width,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no">
<style>
.image-wrap {
position: relative;
}
#myDiv {
position: absolute;
border: 1px solid #f71111;
box-sizing: border-box;
}
</style>
</head>
<body>
<div class="image-wrap">
<img id="mobilenet">
</div>
<p>Original image</p>
<div class="image-wrap">
<img id="image" src="pic.png">
<div id="myDiv"></div>
</div>
<p>Canvas</p>
<canvas id="myCanvas"></canvas>
<input type="file" id="uploadImg">
<div id="txt"></div>
<script src="index.es6"></script>
</body>
</html>
{
"name": "paddle-web-demo",
"version": "1.0.0",
"description": "paddle",
"main": "index.js",
"scripts": {
"mnistdemo": "parcel ./examples/mnist/index.html",
"mobilenet": "parcel ./examples/mobileNet/index.html",
"tinyYolo": "parcel ./examples/tinyYolo/index.html",
"huangfan": "parcel ./examples/huangfan/index.html",
"yolo": "parcel ./examples/yolo/index.html",
"videoDemo": "parcel ./examples/videoDemo.html --port 8123 --https",
"unitTest": "parcel ./test/unitTest.html",
"test": "echo \"Error: no test specified\" && exit 1"
},
"devDependencies": {
"@babel/core": "^7.7.2",
"@babel/preset-env": "^7.7.1",
"axios": "^0.17.1",
"babel-core": "^6.26.3",
"babel-loader": "^8.0.6",
"babel-plugin-transform-class-properties": "^6.24.1",
"babel-plugin-transform-decorators-legacy": "^1.3.5",
"babel-plugin-transform-runtime": "^6.23.0",
"babel-polyfill": "^6.26.0",
"babel-preset-env": "^1.7.0",
"babel-preset-react": "^6.24.1",
"babel-preset-stage-0": "^6.24.1",
"babel-runtime": "^6.26.0",
"parcel-bundler": "^1.10.3",
"webpack-cli": "^3.3.6"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"js-file-download": "^0.4.5",
"vconsole": "^3.3.2"
}
}
export PATH=$NODEJS_BIN_LATEST:$PATH
echo "node: $(node -v)"
echo "npm: v$(npm -v)"
npm install
npm run build
/**
* @file video stream class
* @author zhangmiao06
*/
import $ from 'webpack-zepto';
export default class Camera {
constructor(option) {
this.option = option;
this.video = option.videoDom;
// whether the camera can be switched
this.haveDevice = false;
// set the video stream width
if (option.width) {
this.video.width = option.width;
}
else if (option.height) {
this.video.height = option.height;
}
else {
this.video.width = window.innerWidth;
}
this.deviceInfos = [];
if(navigator.mediaDevices) {
this.haveDevice = true;
}
}
// cross-browser helper for accessing the user's media devices
run(deviceId, callback) {
if (window.stream) {
window.stream.getTracks().forEach(function (track) {
track.stop();
});
}
let constraints = {
video: {}
};
const success = stream => {
this.success(stream, callback);
};
const error = this.error.bind(this);
if (this.deviceInfos.length) {
constraints.video.deviceId = {exact: deviceId || this.deviceInfos[0].deviceId};
}
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
// the current standard API
navigator.mediaDevices.getUserMedia(constraints).then(success).catch(error);
}
else if (navigator.webkitGetUserMedia) {
// WebKit-based browsers
navigator.webkitGetUserMedia(constraints, success, error);
}
else if (navigator.mozGetUserMedia) {
// Firefox
navigator.mozGetUserMedia(constraints, success, error);
}
else if (navigator.getUserMedia) {
// legacy API
navigator.getUserMedia(constraints, success, error);
}
else {
console.log('Your browser does not support capturing a video stream.');
}
}
success(stream, callback) {
const domElement = this.video;
// make stream available to console
window.stream = stream;
// older browsers may not have srcObject
const URL = window.URL || window.webkitURL || window.mozURL || window.msURL;
if ('srcObject' in domElement) {
try {
domElement.srcObject = stream;
} catch (error) {
domElement.src = URL.createObjectURL(stream) || stream;
}
} else {
// avoid using it in newer browsers, since it is no longer supported there
domElement.src = URL.createObjectURL(stream) || stream;
}
domElement.addEventListener('loadeddata', () => {
// set the video stream height
if (this.option.height) {
domElement.width = $(domElement).width();
}
else {
domElement.height = $(domElement).height();
}
domElement.play();
callback && callback();
}, false);
}
error(error) {
alert(`Failed to access user media devices: ${error.name}, ${error.message}`);
}
// process the camera list
gotDevices(deviceInfos) {
const ua = navigator.userAgent;
const isIos = /iphone|ipod|ipad/ig.test(ua);
let delt = -1;
const range = deviceInfos.length;
let start = range - 1;
let end = - 1;
// camera order is reversed on iOS devices
if (isIos) {
delt = 1;
start = 0;
end = range;
}
for (let i = start; i !== end; i += delt) {
const deviceInfo = deviceInfos[i];
if (deviceInfo.kind === 'videoinput') {
this.deviceInfos.push(deviceInfos[i]);
}
}
}
get curVideo() {
return this.video;
}
getDevices() {
return new Promise((resolve, reject)=> {
if (this.haveDevice) {
if (this.deviceInfos.length) {
resolve(this.deviceInfos);
}
else {
navigator.mediaDevices.enumerateDevices()
.then(this.gotDevices.bind(this))
.then(()=> {
resolve(this.deviceInfos);
});
}
}
else {
resolve([]);
}
});
}
}
/* eslint-disable */
/**
* @file GraphExecutor, wraps an executable unit
* @author wangqun@baidu.com
*/
// const fileDownload = require('js-file-download');
let start;
export default class GraphExecutor {
constructor(model) {
this.inputs = model.inputs;
this.outputs = model.outputs;
this.attrs = model.attrs || model['sub-attrs'];
this.type = model.type;
this.finish = false;
this.next = null;
this.opData = null;
this.id = +new Date() + model.type + Math.floor(Math.random() * 10 + 1) + model.idx;
}
get inputsName() {
if (this.type === 'feed') {
return this.inputs.X;
}
else if (this.type === 'batchnorm' || this.type === 'batch_norm') {
return this.inputs.X;
}
else if (this.type === 'conv2d') {
return this.inputs.Input;
}
else if (this.type === 'depthwise_conv2d') {
return this.inputs.Input;
}
else if (this.type === 'elementwise_add') {
return this.inputs.X;
}
else if (this.type === 'relu' || this.type === 'leaky_relu') {
return this.inputs.X;
}
else if (this.type === 'pool2d') {
return this.inputs.X;
}
else if (this.type === 'mul') {
return this.inputs.X;
}
else if (this.type === 'softmax') {
return this.inputs.X;
}
else if (this.type === 'scale') {
return this.inputs.X;
}
else if (this.type === 'fetch') {
return this.inputs.X;
}
return this.inputs.Input || this.inputs.X;
}
get outputsName() {
if (this.type === 'conv2d') {
return this.outputs.Output;
}
else if (this.type === 'depthwise_conv2d') {
return this.outputs.Output;
}
else if (this.type === 'batchnorm' || this.type === 'batch_norm') {
this.outputs.out = this.outputs.Y;
return this.outputs.Y;
}
else {
return this.outputs.Out || this.outputs.Output;
}
}
/**
* Associates the input data with the concrete op and triggers execution of each op
* @param runtime
* @param isRendered
*/
execute(runtime, isRendered) {
// console.log(inputs, outputs);
if (this.type !== 'feed') {
// let time = +Date.now();
// log.start(this.opData.iLayer + '-' + this.type);
console.log(this.type, this.opData);
runtime.run(this.type, this.opData, isRendered);
// log.end(this.opData.iLayer + '-' + this.type);
// if (runtime.gpu.frameBufferIsComplete().isComplete) {
// var result = runtime.read();
// let res = Array.prototype.slice.call(result);
// fileDownload(res, "result.csv");
// }
// let length = statistic.length;
// statistic[length - 1].type = this.type;
// statistic[length - 1].runTime = +Date.now() - time;
// if (this.type === 'scale') {
// console.log('time: ' + (+Date.now() - start));
// }
} else {
start = +Date.now();
}
}
}
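// A minimal usage sketch (assumptions for illustration: `opModel` is one op entry
// from model.json and `runtime` is the paddlejs runtime that executes ops):
// const executor = new GraphExecutor(opModel);
// console.log(executor.inputsName, executor.outputsName);
// executor.execute(runtime, false);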
/* eslint-enable */
/* eslint-disable */
/* image post-processing by zhangmiao06 */
// let preTestRun = index => {
// let img = document.getElementById('image');
// img.src = tempPic[index];
// img.onload = function () {
// testRun(testOutput.data[index], img);
// };
// };
import models from '../utils/models';
const isSimilar = (r1, r2, threshold = 5) => {
return Math.max(Math.abs(r1[0] - r2[0]), Math.abs(r1[1] - r2[1])) < threshold;
// return Math.abs((r1[0] + r1[1] + r1[2] + r1[3]) - (r2[0] + r2[1] + r2[2] + r2[3])) < threshold;
}
// sigmoid
let sigmoid = (x) => {
if (x < -100) {
return 0.0;
}
return 1 / (1 + Math.exp(-x));
};
// transpose
let transpose = (data) => {
let shape = data.shape;
let transposeShape = data.transposeShape;
let formatData = data.data;
let formatData2 = [];
for (let n = 0; n < shape[transposeShape[0]]; n++) {
let nData = [];
for (let c = 0; c < shape[transposeShape[1]]; c++) {
let cData = [];
for (let row = 0; row < shape[transposeShape[2]]; row++) {
let rowData = [];
for (let col = 0; col < shape[transposeShape[3]]; col++) {
let tempArr = [n, c, row, col];
let newN = n;
let newC = c;
let newW = row;
let newH = col;
transposeShape.forEach((item, index) => {
switch (item) {
case 0:
newN = tempArr[index];
break;
case 1:
newC = tempArr[index];
break;
case 2:
newW = tempArr[index];
break;
case 3:
newH = tempArr[index];
}
});
rowData.push(formatData[newN][newC][newW][newH]);
}
cData.push(rowData);
}
nData.push(cData);
}
formatData2.push(nData);
}
return formatData2;
};
// reshape
const reshape = (data) => {
let formatData2 = data.data;
let shape = data.shape;
let reshapeShape = data.reshapeShape;
// 1. flatten to 1-D
let tempData = reshapeOne({
data: formatData2,
shape: shape
});
// 2. reshape to n-D
let formatData3 = reshapeMany({
data: tempData,
reshapeShape: reshapeShape
});
return formatData3;
};
// flatten to 1-D
const reshapeOne = (data) => {
let formatData2 = data.data;
let shape = data.shape;
let tempData = [];
for (let n = 0; n < shape[0]; n++) {
for (let c = 0; c < shape[1]; c++) {
for (let row = 0; row < shape[2]; row++) {
for (let col = 0; col < shape[3]; col++) {
tempData.push(formatData2[n][c][row][col]);
}
}
}
}
return tempData;
};
// reshape to n-D
const reshapeMany = data => {
let tempData = data.data;
let reshapeShape = data.reshapeShape;
let formatData3 = [];
for (let n = 0; n < reshapeShape[0]; n++) {
let nData = [];
for (let c = 0; c < reshapeShape[1]; c++) {
let cData = [];
for (let row = 0; row < reshapeShape[2]; row++) {
let rowData = [];
for (let col = 0; col < reshapeShape[3]; col++) {
let tempN = n * reshapeShape[1] * reshapeShape[2] * reshapeShape[3];
let tempC = c * reshapeShape[2] * reshapeShape[3];
let tempRow = row * reshapeShape[3];
rowData.push(tempData[tempN + tempC + tempRow + col]);
}
cData.push(rowData);
}
nData.push(cData);
}
formatData3.push(nData);
}
return formatData3;
};
export default class PostProcess {
constructor(options) {
this.modelConfig = models[options.modelName];
this.count = 0;
        this.lastRect = [0, 0, 0, 0];
}
    run(data, img, callback, canvas) {
        let {from, to} = this.modelConfig.outputShapes;
        let shape = [].concat(from).reverse();
        // 1. from a flat array to 1x25x19x19
        let formatData = reshapeMany({
            data: data,
            reshapeShape: shape
        });
        // 2. from 1x25x19x19 to 19x19x25x1
        let formatData2 = transpose({
            data: formatData,
            shape: shape,
            transposeShape: [2, 3, 1, 0]
        });
        // 3. from 19x19x25x1 to 19x19x5x5
        let formatData3 = reshape({
            data: formatData2,
            shape: from,
            reshapeShape: to
        });
        // 4. decode the output
        let finalData = this.handleFinal(formatData3, shape, img);
        // 5. update the canvas
        // finalData.length && handleCanvas(finalData, img);
        this.handleDiv(finalData, img, callback, canvas);
    }
calSize(img) {
let w1 = img.width;
let h1 = img.height;
let wh1 = Math.max(w1, h1);
let factor = this.modelConfig.feedShape.fw / wh1;
// let factor = 608.0 / wh1;
let width = Math.round(w1 * factor);
let height = Math.round(h1 * factor);
return [w1, h1, width, height];
}
    // decode the raw network output into candidate boxes
handleFinal(formatData3, shape, img) {
let finalData = [];
let c = shape[2];
let [w1, h1, width, height] = this.calSize(img);
let factorX = Math.max(width, height) / width;
let factorY = Math.max(width, height) / height;
let maxProb = 0.0;
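        // YOLOv2-style decode: each anchor predicts (tx, ty, tw, th, conf);
        // box centers use sigmoid offsets inside the grid cell, box sizes use
        // exp() scaled by the anchor priors, all normalized by the grid size c.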
let anchors = [[1.603231, 2.094468], [6.041143, 7.080126], [2.882459, 3.518061], [4.266906, 5.178857], [9.041765, 10.66308]];
for (let i = 0; i < shape[2]; i++) {
for (let j = 0; j < shape[3]; j++) {
for (let k = 0; k < anchors.length; k++) {
let [a1, a2, a3, a4, prob] = formatData3[i][j][k];
prob = sigmoid(prob);
if (prob > maxProb && prob >= 0.5) {
let ctx = (j + sigmoid(a1)) / c * factorX;
let cty = (i + sigmoid(a2)) / c * factorY;
let col = Math.exp(a3) * anchors[k][0] / c * factorX;
let row = Math.exp(a4) * anchors[k][1] / c * factorY;
let x = (ctx - (col / 2));
let y = (cty - (row / 2));
finalData.push([x * w1, y * h1, col * w1, row * h1, prob]);
}
}
}
}
return finalData;
}
    handleDiv(finalData, img, callback, canvas) {
        if (finalData.length < 1) {
            callback();
            return false;
        }
        let maxIndex = 0;
        if (finalData.length > 1) {
            for (let i = 1; i < finalData.length; i++) {
                // entries are [x, y, width, height, prob]; pick the most confident
                if (finalData[i][4] > finalData[maxIndex][4]) {
                    maxIndex = i;
                }
            }
        }
        let [demoLeft, demoTop, demoWidth, demoHeight] = finalData[maxIndex];
        if (!isSimilar(this.lastRect, [demoLeft, demoTop, demoWidth, demoHeight])) {
            callback([demoWidth, demoHeight, demoLeft, demoTop], canvas);
        }
        this.lastRect = [demoLeft, demoTop, demoWidth, demoHeight];
    }
    // draw the detection box onto the canvas
    handleCanvas(finalData, img) {
        let myCanvas = document.getElementById('myCanvas');
        let [w1, h1, width, height] = this.calSize(img);
myCanvas.width = w1;
myCanvas.height = h1;
let ctx = myCanvas.getContext('2d');
// ctx.drawImage(img, 0, 0, w1, h1);
// finalData.forEach((demoArr, index) => {
// let [demoLeft, demoTop, demoWidth, demoHeight, prob] = demoArr;
let [demoLeft, demoTop, demoWidth, demoHeight, prob] = finalData[0];
ctx.beginPath();
ctx.lineWidth = 4;
ctx.strokeStyle = 'red';
ctx.moveTo(demoLeft, demoTop);
ctx.lineTo(demoLeft + demoWidth, demoTop);
ctx.lineTo(demoLeft + demoWidth, demoTop + demoHeight);
ctx.lineTo(demoLeft, demoTop + demoHeight);
ctx.closePath();
ctx.stroke();
// });
}
}
/**
 * @file Runner: wraps the whole pipeline
 * @author hantian(hantianjiao@baidu.com)
 * Usage:
 * const runner = new Runner({
 *     modelName: 'separate' // '608' | '320' | '320fused' | 'separate'
 * });
 * runner.preheat().then(r => {
 *     r.run(document.getElementById('test'));
 * });
 */
import IO from '../feed/ImageFeed';
import DataFeed from '../feed/dataFeed';
import Graph from './loader';
import PostProcess from './postProcess';
import models from '../utils/models';
import Logger from '../../tools/logger';
window.log = new Logger();
export default class Runner {
    // load the model & warm it up
constructor(options) {
this.modelConfig = models[options.modelName];
this.flags = {
isRunning: false,
isPreheating: false,
runVideoPaused: false
};
this.buffer = new Float32Array();
this.io = new IO();
this.postProcess = new PostProcess(options);
}
    // warm up: run the network once with empty data
async preheat() {
this.flags.isPreheating = true;
let {fh, fw} = this.modelConfig.feedShape;
let path = this.modelConfig.modelPath;
let feed = [{
data: new Float32Array(3 * fh * fw),
name: 'image',
shape: [1, 3, fh, fw]
}];
const MODEL_URL = `/${path}/model.json`;
let dir = `https://mms-graph.cdn.bcebos.com/activity/facegame/paddle/${path}/`;
if (location.href.indexOf('test=1') > -1) {
dir = `/src/view/common/lib/paddle/${path}/`;
}
const MODEL_CONFIG = {
dir: dir,
            main: 'model.json' // main file
};
const graphModel = new Graph();
this.model = await graphModel.loadGraphModel(MODEL_CONFIG, {
multipart: true,
dataType: 'binary',
binaryOption: {
                fileCount: 1, // number of files the weights were split into
                getFileName(i) { // name of the i-th file
                    return 'chunk_0.dat';
                }
},
feed
});
this.model.execute({
input: feed
});
this.flags.isPreheating = false;
return this;
}
    // run the pipeline once
async run(input, callback) {
this.flags.isRunning = true;
let {fh, fw} = this.modelConfig.feedShape;
let path = this.modelConfig.modelPath;
if (!this.model) {
console.warn('It\'s better to preheat the model before running.');
await this.preheat();
}
        // log.start('total'); // eslint-disable-line
        // log.start('preprocess'); // eslint-disable-line
let feed;
if (typeof input === 'string') {
const dfIO = new DataFeed();
feed = await dfIO.process({
input: `/${path}/${input}`,
shape: [1, 3, fh, fw]
});
}
else {
feed = this.io.process({
input: input,
params: {
                    gapFillWith: '#000', // fill color for the leftover square area after scaling
                    targetSize: {
                        height: fw,
                        width: fh
                    },
                    targetShape: [1, 3, fh, fw], // target shape, renamed for compatibility with older logic
                    // shape: [3, 608, 608], // preset tensor shape
                    mean: [117.001, 114.697, 97.404] // preset mean
                    // std: [0.229, 0.224, 0.225] // preset std
}
});
}
        // log.end('preprocess'); // eslint-disable-line
        // log.start('inference'); // eslint-disable-line
let inst = this.model.execute({
input: feed
});
let result = await inst.read();
        // log.end('postprocess-read'); // eslint-disable-line
const newData = [];
let newIndex = -1;
const [w, h, c, b] = this.modelConfig.outputShapes.from;
// c channel
for (let i = 0; i < c; i++) {
// height channel
for (let j = 0; j < h; j++) {
// width channel
for (let k = 0; k < w; k++) {
// position: (0, 0, 0, 0)
const index = j * (c * h) + k * c + i;
// const index = j * (i * k) + k * i + i;
newData[++newIndex] = result[index];
}
}
}
this.postProcess.run(newData, input, callback, feed[0].canvas);
        // log.end('postprocess'); // eslint-disable-line
this.flags.isRunning = false;
        // log.end('total'); // eslint-disable-line
}
    // pass in a function that supplies the image
async runStream(getMedia, callback) {
await this.run(getMedia, callback);
if (!this.flags.runVideoPaused) {
setTimeout(async () => {
await this.runStream(getMedia, callback);
}, 0);
}
}
stopStream() {
this.flags.runVideoPaused = true;
}
startStream(getMedia, callback) {
this.flags.runVideoPaused = false;
this.runStream(getMedia, callback);
}
}
[中文版](./README_cn.md)
# PaddleJS Operators Support Table
Operators are the computational units corresponding to the layers of a neural network; their implementations follow the underlying algorithms. The table below shows which operators PaddleJS supports. PaddleJS currently supports the GPU compute backend.
See the compatibility list for the supported platforms; this file will change as the number of operators grows and support evolves.
Baidu PaddleJS runs ready-made JavaScript models, or Paddle models converted to run in the browser.
## Demonstration
| Operator | GPU Backend | Description |
| ------------- | ------------- | ------------- |
| conv2d_transpose | WebGL1, WebGL2 | |
| conv2d | WebGL1, WebGL2 | |
| conv2d_depthwise | WebGL1, WebGL2 | |
| conv2d_elementwise_add | WebGL1, WebGL2 | |
| conv2d_elementwise_add_winograd | WebGL1, WebGL2 | |
| dynamic | WebGL1, WebGL2 | |
| scale | WebGL1, WebGL2 | |
| pool2d | WebGL1, WebGL2 | |
| pool2d_max | WebGL1, WebGL2 | |
| pool2d_winograd | WebGL1, WebGL2 | |
| elementwise_add | WebGL1, WebGL2 | |
| mul | WebGL1, WebGL2 | |
| relu | WebGL1, WebGL2 | |
| relu6 | WebGL1, WebGL2 | |
| softmax | WebGL1, WebGL2 | |
| batchnorm | WebGL1, WebGL2 | |
| reshape | WebGL1, WebGL2 | |
| transpose | WebGL1, WebGL2 | |
## Browser coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
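To give a sense of how these operators reach WebGL, the sketch below assembles the GLSL fragment shader for one supported op with the shader factory from `src/factory/fshader` (shown later in this document). The import path and the template values passed as `data` are illustrative assumptions; in the real pipeline they come from the `OpData` of the concrete layer.
```javascript
import Factory from './src/factory/fshader/factory';

// Minimal sketch: build the fragment shader source for one supported op.
const factory = new Factory({});
factory.setWebglVersion(1); // WebGL1 falls back to texture2D sampling

// Hypothetical template values; uppercased keys are substituted into the
// shader source (missing values default to 1).
const fsCode = factory.buildShader('conv2d', {
    width_shape_out: 19,
    height_shape_out: 19
});
console.log(fsCode.slice(0, 80)); // beginning of the generated GLSL
```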
# PaddleJS Operators Support Table
Operators are the computational units corresponding to the layers of a neural network; their implementations follow the underlying algorithms. The table shows which operators PaddleJS supports. PaddleJS currently supports the GPU compute backend.
See the compatibility list for the supported platforms; this file will change as the number of operators grows and support evolves.
## Demonstration
| Operator | GPU Backend | Description |
| ------------- | ------------- | ------------- |
| conv2d_transpose | WebGL1, WebGL2 | |
| conv2d | WebGL1, WebGL2 | |
| conv2d_depthwise | WebGL1, WebGL2 | |
| conv2d_elementwise_add | WebGL1, WebGL2 | |
| conv2d_elementwise_add_winograd | WebGL1, WebGL2 | |
| dynamic | WebGL1, WebGL2 | |
| scale | WebGL1, WebGL2 | |
| pool2d | WebGL1, WebGL2 | |
| pool2d_max | WebGL1, WebGL2 | |
| pool2d_winograd | WebGL1, WebGL2 | |
| elementwise_add | WebGL1, WebGL2 | |
| mul | WebGL1, WebGL2 | |
| relu | WebGL1, WebGL2 | |
| relu6 | WebGL1, WebGL2 | |
| softmax | WebGL1, WebGL2 | |
| batchnorm | WebGL1, WebGL2 | |
| reshape | WebGL1, WebGL2 | |
| transpose | WebGL1, WebGL2 | |
## Browser coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
import ops from './ops';
/**
 * @file Factory class that generates fragment shaders
 * @author wangqun
 */
export default class Factory {
constructor(opts) {
this.defaultOpts = Object.assign({}, opts);
this.webglVersion = 2;
this.texture2d = 'texture';
}
setWebglVersion(vs = 0) {
this.webglVersion = vs;
if (vs === 1) {
this.texture2d = 'texture2D';
}
}
buildShader(opName, data) {
let result = '';
result = this.buildPrefix(opName);
result += this.buildCommon(opName);
result += this.buildOp(opName);
data.texture2d = this.texture2d;
result = this.populateData(result, data);
return result;
}
buildPrefix(opName) {
if (this.webglVersion === 1) {
return ops.common.prefix;
}
return ops.common.prefix2;
}
buildCommon(opName) {
return ops.common.params + ops.common.func;
}
buildOp(opName) {
let code = ops.ops[opName].params;
        // helper snippets this op depends on
let atoms = ops.atoms;
let confs = ops.ops[opName].confs;
let dep = confs.dep || [];
dep.map(item => {
let func = item.func;
let data = item.conf;
let snippet = atoms[func];
code += this.populateData(snippet, data);
});
// suffix
code += this.buildSuffix(opName);
        // main function
code += ops.ops[opName].func;
return code;
}
buildSuffix(opName) {
return ops.common.suffix;
}
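    // populateData is a simple template engine: every key of `data`,
    // uppercased, is treated as a placeholder token in the shader source
    // and replaced globally by its value (undefined values default to 1).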
populateData(result, data) {
let code = result;
for (let key in data) {
code = code.replace(new RegExp(key.toUpperCase(), 'g'),
((typeof data[key]) === 'undefined') ? 1 : data[key]);
}
return code;
}
getOpConfs() {
const opsConfs = {};
for (let key in ops.ops) {
if (ops.ops.hasOwnProperty(key)) {
opsConfs[key] = ops.ops[key].confs.input;
}
}
return opsConfs;
}
}
/* eslint-disable */
import common_params from '../../shader/atom/common_params';
import common_func from '../../shader/atom/common_func';
import prefix from '../../shader/atom/prefix';
import prefix2 from '../../shader/atom/prefix2';
import suffix from '../../shader/atom/suffix';
import ivec56 from '../../shader/atom/type_ivec56';
import conv2d_params from '../../shader/conv2d/params';
import conv2d_func from '../../shader/conv2d/main';
import conv2d_conf from '../../shader/conv2d/conf';
import conv2d_depthwise_params from '../../shader/conv2d_depthwise/params';
import conv2d_depthwise_func from '../../shader/conv2d_depthwise/main';
import conv2d_depthwise_conf from '../../shader/conv2d_depthwise/conf';
import dynamic_params from '../../shader/dynamic/params';
import dynamic_func from '../../shader/dynamic/main';
import dynamic_conf from '../../shader/dynamic/conf';
import pool2d_params from '../../shader/pool2d/params';
import pool2d_func from '../../shader/pool2d/main';
import pool2d_conf from '../../shader/pool2d/conf';
import pool2d_max_params from '../../shader/pool2d_max/params';
import pool2d_max_func from '../../shader/pool2d_max/main';
import pool2d_max_conf from '../../shader/pool2d_max/conf';
import pool2d_winograd_params from '../../shader/pool2d_winograd/params';
import pool2d_winograd_func from '../../shader/pool2d_winograd/main';
import pool2d_winograd_conf from '../../shader/pool2d_winograd/conf';
import elementwise_add_params from '../../shader/elementwise_add/params';
import elementwise_add_func from '../../shader/elementwise_add/main';
import elementwise_add_conf from '../../shader/elementwise_add/conf';
import mul_params from '../../shader/mul/params';
import mul_func from '../../shader/mul/main';
import mul_conf from '../../shader/mul/conf';
import softmax_params from '../../shader/softmax/params';
import softmax_func from '../../shader/softmax/main';
import softmax_conf from '../../shader/softmax/conf';
import batchnorm_params from '../../shader/batchnorm/params';
import batchnorm_func from '../../shader/batchnorm/main';
import batchnorm_conf from '../../shader/batchnorm/conf';
import conv2d_elementwise_add_params from '../../shader/conv2d_elementwise_add/params';
import conv2d_elementwise_add_func from '../../shader/conv2d_elementwise_add/main';
import conv2d_elementwise_add_conf from '../../shader/conv2d_elementwise_add/conf';
import conv2d_elementwise_add_winograd_params from '../../shader/conv2d_elementwise_add_winograd/params';
import conv2d_elementwise_add_winograd_func from '../../shader/conv2d_elementwise_add_winograd/main';
import conv2d_elementwise_add_winograd_conf from '../../shader/conv2d_elementwise_add_winograd/conf';
import getArrayIndexFromTensorPos from '../../shader/atom/getArrayIndexFromTensorPos';
import getArrayIndexFromTexturePos from '../../shader/atom/getArrayIndexFromTexturePos';
import getTensorPosFromArrayIndex from '../../shader/atom/getTensorPosFromArrayIndex';
import getTexturePosFromArrayIndex from '../../shader/atom/getTexturePosFromArrayIndex';
import getValueFromTexturePos from '../../shader/atom/getValueFromTexturePos';
import getValueFromTensorPos from '../../shader/atom/getValueFromTensorPos';
import getValueFromTensorPosPacked from '../../shader/atom/getValueFromTensorPosPacked';
import moveTexture2PosToReal from '../../shader/atom/moveTexture2PosToReal';
import getPixelsFromTexturePos from '../../shader/atom/getPixelsFromTexturePos';
import getRangePowSumFromArrayIndex from '../../shader/atom/getRangePowSumFromArrayIndex';
import getRangeSumFromArrayIndex from '../../shader/atom/getRangeSumFromArrayIndex';
import sigmoid from '../../shader/atom/sigmoid';
import prelu from '../../shader/atom/prelu';
import scale from '../../shader/atom/scale';
import softmax from '../../shader/atom/softmax';
/**
 * @file op definitions: shader params, main functions and configs
 * @author yangmingming
 */
export default {
common: {
params: common_params,
func: common_func,
prefix,
prefix2,
suffix,
ivec56
},
ops: {
conv2d: {
params: conv2d_params,
func: conv2d_func,
confs: conv2d_conf
},
conv2d_depthwise: {
params: conv2d_depthwise_params,
func: conv2d_depthwise_func,
confs: conv2d_depthwise_conf
},
conv2d_elementwise_add: {
params: conv2d_elementwise_add_params,
func: conv2d_elementwise_add_func,
confs: conv2d_elementwise_add_conf
},
conv2d_elementwise_add_winograd: {
params: conv2d_elementwise_add_winograd_params,
func: conv2d_elementwise_add_winograd_func,
confs: conv2d_elementwise_add_winograd_conf
},
dynamic: {
params: dynamic_params,
func: dynamic_func,
confs: dynamic_conf
},
pool2d: {
params: pool2d_params,
func: pool2d_func,
confs: pool2d_conf
},
pool2d_max: {
params: pool2d_max_params,
func: pool2d_max_func,
confs: pool2d_max_conf
},
pool2d_winograd: {
params: pool2d_winograd_params,
func: pool2d_winograd_func,
confs: pool2d_winograd_conf
},
elementwise_add: {
params: elementwise_add_params,
func: elementwise_add_func,
confs: elementwise_add_conf
},
mul: {
params: mul_params,
func: mul_func,
confs: mul_conf
},
relu: {
params: dynamic_params,
func: dynamic_func,
confs: dynamic_conf
},
relu6: {
params: dynamic_params,
func: dynamic_func,
confs: dynamic_conf
},
scale: {
params: dynamic_params,
func: dynamic_func,
confs: dynamic_conf
},
softmax: {
params: softmax_params,
func: softmax_func,
confs: softmax_conf
},
batchnorm: {
params: batchnorm_params,
func: batchnorm_func,
confs: batchnorm_conf
}
},
atoms: {
getArrayIndexFromTensorPos,
getArrayIndexFromTexturePos,
getTensorPosFromArrayIndex,
getTexturePosFromArrayIndex,
getValueFromTexturePos,
getValueFromTensorPos,
getValueFromTensorPosPacked,
moveTexture2PosToReal,
getPixelsFromTexturePos,
getRangeSumFromArrayIndex,
getRangePowSumFromArrayIndex,
sigmoid,
prelu,
scale,
softmax
}
};
/* eslint-disable */
/**
 * @file image feed: builds image-related model input
 * @author wangqun@baidu.com
 */
export default class imageFeed {
constructor() {
this.fromPixels2DContext = document.createElement('canvas').getContext('2d');
this.fromPixels2DContext2 = document.createElement('canvas').getContext('2d');
this.defaultWidth = 224;
this.defaultHeight = 224;
this.minPixels = 225;
this.pixels = '';
this.defaultParams = {
gapFillWith: '#000',
std: [1, 1, 1]
};
};
    /**
     * Process the image input.
     * @param inputs
     */
process(inputs) {
const input = inputs.input;
const mode = inputs.mode;
const channel = inputs.channel;
const rotate = inputs.rotate;
const params = {
...this.defaultParams,
...inputs.params
};
let output = [];
if (!this.result) {
const [b, c, h, w] = params.targetShape;
            // allocate the Float32Array required by targetShape
this.result = new Float32Array(h * w * c);
}
output = this.fromPixels(input, params);
return output;
};
    /**
     * Crop the image & reshape the image tensor.
     * @param shape
     */
reshape(imageData, opt, scaleSize) {
const {sw, sh} = scaleSize;
const {width, height} = opt;
const hPadding = Math.ceil((sw - width) / 2);
const vPadding = Math.ceil((sh - height) / 2);
let data = imageData.data;
// channel RGB
let red = [];
let green = [];
let blue = [];
        // mean
        let mean = opt.mean;
        // standard deviation
        let std = opt.std;
        // gather the data channel by channel
for (let i = 0; i < data.length; i += 4) {
let index = i / 4;
let vIndex = Math.floor(index / sw);
let hIndex = index - (vIndex * sw) - 1;
if (hIndex >= hPadding && hIndex < (hPadding + width) &&
vIndex >= vPadding && vIndex < (vPadding + height)) {
red.push(((data[i] / 255) - mean[0]) / std[0]); // red
green.push(((data[i + 1] / 255) - mean[1]) / std[1]); // green
blue.push(((data[i + 2] / 255) - mean[2]) / std[2]); // blue
}
}
        // convert to NCHW layout for GPU acceleration
let tmp = green.concat(blue);
return red.concat(tmp);
};
    /**
     * Convert everything to rgb * H * W.
     * @param shape
     */
allReshapeToRGB(imageData, opt, scaleSize) {
const {sw, sh} = scaleSize;
const [b, c, h, w] = opt.targetShape;
let data = imageData.data || imageData;
let mean = opt.mean;
let dataLength = data.length;
// let result = new Float32Array(dataLength * 3);
let result = this.result;
// let offsetR = 0;
// let offsetG = dataLength / 4;
// let offsetB = dataLength / 2;
let offset = 0;
let size = h * w;
// h w c
for (let i = 0; i < h; ++i) {
let iw = i * w;
for (let j = 0; j < w; ++j) {
let iwj = iw + j;
for (let k = 0; k < c; ++k) {
let a = iwj * 4 + k;
result[offset++] = (data[a] - mean[k]) / 256;
}
}
}
return result;
};
    /**
     * Scale the image according to scale.
     * @param image
     * @param params
     * @return {Object} the scaled size
     */
reSize(image, params) {
        // original image width/height
        const width = this.pixelWidth;
        const height = this.pixelHeight;
        // scaled width/height
        let sw = width;
        let sh = height;
        // scale the shorter side to params.scale
if (width < height) {
sw = params.scale;
sh = Math.round(sw * height / width);
} else {
sh = params.scale;
sw = Math.round(sh * width / height);
}
this.fromPixels2DContext.canvas.width = sw;
this.fromPixels2DContext.canvas.height = sh;
this.fromPixels2DContext.drawImage(
image, 0, 0, sw, sh);
this.setInputCanvas(image);
return {sw, sh};
};
    /**
     * Scale to the target size and center the image.
     */
fitToTargetSize(image, params, center) {
        // target size
const targetWidth = params.targetSize.width;
const targetHeight = params.targetSize.height;
this.fromPixels2DContext.canvas.width = targetWidth;
this.fromPixels2DContext.canvas.height = targetHeight;
        this.fromPixels2DContext.fillStyle = params.gapFillWith;
        this.fromPixels2DContext.fillRect(0, 0, targetWidth, targetHeight);
        // scaled width/height
let sw = targetWidth;
let sh = targetHeight;
let x = 0;
let y = 0;
        // target aspect ratio is larger: scale the source height to the target height
if (targetWidth / targetHeight * this.pixelHeight / this.pixelWidth >= 1) {
sw = Math.round(sh * this.pixelWidth / this.pixelHeight);
x = Math.floor((targetWidth - sw) / 2);
}
        // target aspect ratio is smaller: scale the source width to the target width
        else {
sh = Math.round(sw * this.pixelHeight / this.pixelWidth);
y = Math.floor((targetHeight - sh) / 2);
}
// console.log(x, y, sw, sh);
if (center) {
this.fromPixels2DContext.drawImage(
image, x, y, sw, sh);
}
else {
this.fromPixels2DContext.drawImage(
image, 0, 0, sw, sh);
// currentPic = this.fromPixels2DContext.canvas.toDataURL();
}
this.setInputCanvas(image);
        // window.currentPic = this.fromPixels2DContext.canvas; // test only, delete me
        // document.getElementById('p-c').appendChild(this.fromPixels2DContext.canvas); // test only, delete me
return {sw: targetWidth, sh: targetHeight};
}
    /**
     * Draw the original video frame onto its own canvas.
     * @param image the original video element
     */
setInputCanvas(image) {
        // original video width/height
        const width = this.pixelWidth;
        const height = this.pixelHeight;
        // canvas setup
this.fromPixels2DContext2.canvas.width = width;
this.fromPixels2DContext2.canvas.height = height;
this.fromPixels2DContext2.drawImage(image, 0, 0, width, height);
}
    /**
     * Get the image pixel data.
     * @param pixels
     * @returns {Uint8ClampedArray}
     */
getImageData(pixels, scaleSize) {
const {sw, sh} = scaleSize;
        // copy the pixel data of the given rectangle from the canvas
let vals = this.fromPixels2DContext
.getImageData(0, 0, sw, sh);
        // crop the image
// const width = pixels.width;
// const height = pixels.height;
return vals;
};
    /**
     * Compute a grayscale image.
     * @param imageData
     * @returns {*}
     */
grayscale (imageData) {
let data = imageData.data;
for (let i = 0; i < data.length; i += 4) {
            // 3-channel grayscale, no spatial compression
let avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
data[i] = avg; // red
data[i + 1] = avg; // green
data[i + 2] = avg; // blue
}
return data;
};
fromPixels(pixels, opt) {
let data;
        // pixel data of the original video canvas
let data2;
let scaleSize;
if (pixels instanceof HTMLImageElement || pixels instanceof HTMLVideoElement) {
this.pixelWidth = pixels.naturalWidth || pixels.width;
this.pixelHeight = pixels.naturalHeight || pixels.height;
            if (opt.scale) { // backward compatible: with scale, the shorter side is scaled to scale
scaleSize = this.reSize(pixels, opt);
data = this.getImageData(opt, scaleSize);
data2 = this.fromPixels2DContext2.getImageData(0, 0, this.pixelWidth, this.pixelHeight);
}
            else if (opt.targetSize) { // with targetSize, fit the image into the target width/height
scaleSize = this.fitToTargetSize(pixels, opt);
data = this.getImageData(opt, scaleSize);
data2 = this.fromPixels2DContext2.getImageData(0, 0, this.pixelWidth, this.pixelHeight);
}
}
if (opt.gray) {
            data = this.grayscale(data);
}
if (opt.reShape) {
data = this.reshape(data, opt, scaleSize);
}
if (opt.targetShape) {
data = this.allReshapeToRGB(data, opt, scaleSize);
}
return [{data: data, shape: opt.shape || opt.targetShape, name: 'image', canvas: data2}];
}
}
/* eslint-enable */
[中文版](./README_cn.md)
# PaddleJS FEED
Baidu PaddleJS provides an input preprocessing module implemented in JavaScript that helps developers quickly convert input data into the format required by Paddle.
# PaddleJS Input Preprocessing
Baidu PaddleJS provides an input preprocessing module implemented in JavaScript that helps developers quickly convert input data into the input format required by Paddle.
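As a usage sketch, the direct-data path below feeds a flat JSON array through the `dataFeed` module defined next; the file path and the shape are example assumptions.
```javascript
import DataFeed from './dataFeed';

// Minimal sketch: '/mnist/x.json' (hypothetical) holds a flat JSON array of
// 1 * 1 * 28 * 28 numbers matching the shape below.
async function loadFeed() {
    const feed = await new DataFeed().process({
        input: '/mnist/x.json',
        shape: [1, 1, 28, 28]
    });
    // feed: [{data: Float32Array, shape: [1, 1, 28, 28], name: 'x'}]
    return feed;
}
```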
/**
 * @file direct data feed
 * @author hantianjiao@baidu.com
 */
export default class dataFeed {
toFloat32Array(data) {
for (let i = 0; i < data.length; i++) {
this.f32Arr[i] = data[i];
}
}
getLengthFromShape(shape) {
return shape.reduce((a, b) => a * b);
}
loadData() {
return fetch(this.dataPath).then(res => res.json());
}
getOutput() {
return this.loadData().then(data => {
this.toFloat32Array(data);
return [{
data: this.f32Arr,
shape: this.shape,
name: 'x'
}];
});
}
async process(input) {
this.len = this.getLengthFromShape(input.shape);
if (!this.f32Arr || this.len > this.f32Arr.length) {
this.f32Arr = new Float32Array(this.len);
}
this.shape = input.shape;
this.dataPath = input.input;
let output = await this.getOutput();
return output;
}
}
/* eslint-disable */
/**
 * @file io: loader-related input and output
 * @author wangqun@baidu.com
 */
export default class io {
constructor() {
this.fromPixels2DContext = document.createElement('canvas').getContext('2d');
};
fromPixels(pixels, opt) {
pixels = pixels.input;
const shape = opt[0].shape;
const numChannels = opt[0].shape[0];
        if (pixels == null) {
            throw new Error(
                'pixels passed to fromPixels() cannot be null');
        }
let vals;
        // tslint:disable-next-line:no-any
if (pixels.getContext != null) {
// tslint:disable-next-line:no-any
vals = pixels
.getContext('2d')
.getImageData(0, 0, pixels.width, pixels.height)
.data;
} else if (pixels instanceof ImageData) {
vals = pixels.data;
} else if (
pixels instanceof HTMLImageElement ||
pixels instanceof HTMLVideoElement) {
if (this.fromPixels2DContext == null) {
throw new Error(
'Can\'t read pixels from HTMLImageElement outside ' +
'the browser.');
}
this.fromPixels2DContext.canvas.width = pixels.width;
this.fromPixels2DContext.canvas.height = pixels.height;
this.fromPixels2DContext.drawImage(
pixels, 0, 0, pixels.width, pixels.height);
vals = this.fromPixels2DContext
.getImageData(0, 0, pixels.width, pixels.height)
.data;
} else {
}
let values;
if (numChannels === 4) {
values = new Array(vals);
} else {
            const numPixels = (shape[1] || pixels.width) * (shape[2] || pixels.height);
// console.log(numPixels, numPixels * numChannels);
values = new Array(numPixels * numChannels);
for (let i = 0; i < numPixels; i++) {
for (let channel = 0; channel < numChannels; ++channel) {
values[i * numChannels + channel] = vals[i * 4 + channel];
}
}
}
        // console.log(pixels.height, pixels.width, numChannels, values);
        // const outShape: [number, number, number] =
        //     [pixels.height, pixels.width, numChannels];
        // NOTE: debug leftover: the computed values are overwritten below
        // with a hardcoded 28x28 grayscale sample.
        values = [
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
7.0,
0.0,
0.0,
0.0,
0.0,
0.0,
6.0,
7.0,
0.0,
0.0,
0.0,
0.0,
3.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
3.0,
0.0,
0.0,
14.0,
16.0,
8.0,
1.0,
0.0,
0.0,
0.0,
14.0,
1.0,
0.0,
0.0,
14.0,
4.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
5.0,
13.0,
0.0,
0.0,
0.0,
9.0,
0.0,
27.0,
0.0,
0.0,
0.0,
5.0,
0.0,
0.0,
3.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
4.0,
0.0,
0.0,
5.0,
11.0,
5.0,
4.0,
8.0,
0.0,
0.0,
15.0,
7.0,
0.0,
2.0,
7.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
11.0,
2.0,
0.0,
0.0,
0.0,
0.0,
4.0,
11.0,
3.0,
0.0,
2.0,
0.0,
5.0,
3.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
2.0,
0.0,
0.0,
10.0,
6.0,
0.0,
0.0,
0.0,
0.0,
4.0,
9.0,
0.0,
0.0,
2.0,
3.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
8.0,
0.0,
8.0,
11.0,
0.0,
4.0,
113.0,
202.0,
249.0,
255.0,
255.0,
135.0,
44.0,
0.0,
7.0,
3.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
2.0,
0.0,
2.0,
0.0,
33.0,
188.0,
230.0,
101.0,
52.0,
6.0,
106.0,
162.0,
183.0,
11.0,
0.0,
4.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
9.0,
0.0,
4.0,
58.0,
230.0,
189.0,
31.0,
0.0,
3.0,
0.0,
14.0,
0.0,
204.0,
17.0,
7.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
20.0,
24.0,
231.0,
181.0,
0.0,
0.0,
5.0,
4.0,
2.0,
0.0,
119.0,
228.0,
0.0,
1.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
173.0,
232.0,
32.0,
4.0,
10.0,
0.0,
0.0,
7.0,
79.0,
230.0,
108.0,
18.0,
0.0,
10.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
2.0,
100.0,
246.0,
47.0,
0.0,
5.0,
0.0,
1.0,
8.0,
63.0,
216.0,
109.0,
0.0,
0.0,
6.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
8.0,
122.0,
210.0,
0.0,
31.0,
0.0,
8.0,
28.0,
109.0,
235.0,
182.0,
0.0,
13.0,
0.0,
22.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
128.0,
233.0,
0.0,
6.0,
66.0,
126.0,
180.0,
191.0,
220.0,
27.0,
0.0,
0.0,
11.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
78.0,
246.0,
233.0,
220.0,
255.0,
199.0,
59.0,
235.0,
68.0,
12.0,
0.0,
1.0,
2.0,
1.0,
10.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
2.0,
0.0,
80.0,
120.0,
139.0,
62.0,
0.0,
155.0,
211.0,
5.0,
10.0,
0.0,
0.0,
0.0,
3.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
1.0,
0.0,
5.0,
2.0,
0.0,
0.0,
90.0,
255.0,
70.0,
0.0,
0.0,
0.0,
9.0,
0.0,
0.0,
9.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
17.0,
5.0,
0.0,
11.0,
47.0,
227.0,
159.0,
0.0,
0.0,
8.0,
0.0,
0.0,
2.0,
6.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
5.0,
0.0,
0.0,
0.0,
4.0,
213.0,
207.0,
19.0,
0.0,
0.0,
3.0,
12.0,
0.0,
2.0,
4.0,
2.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
1.0,
0.0,
16.0,
7.0,
91.0,
253.0,
50.0,
0.0,
0.0,
4.0,
0.0,
2.0,
0.0,
1.0,
2.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
2.0,
5.0,
0.0,
45.0,
252.0,
131.0,
0.0,
8.0,
0.0,
7.0,
0.0,
15.0,
5.0,
0.0,
0.0,
2.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
1.0,
8.0,
11.0,
207.0,
205.0,
30.0,
2.0,
0.0,
0.0,
22.0,
0.0,
0.0,
4.0,
9.0,
11.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
14.0,
155.0,
255.0,
28.0,
0.0,
0.0,
6.0,
4.0,
0.0,
5.0,
150.0,
210.0,
91.0,
17.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
14.0,
40.0,
250.0,
91.0,
0.0,
0.0,
7.0,
0.0,
0.0,
24.0,
0.0,
10.0,
130.0,
183.0,
147.0,
11.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
207.0,
146.0,
4.0,
0.0,
4.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
25.0,
237.0,
29.0,
0.0,
12.0,
0.0,
0.0,
14.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
13.0,
0.0,
15.0,
7.0,
0.0,
9.0,
2.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
4.0,
0.0,
4.0,
3.0,
4.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0
];
return [{data: values, shape: shape, name: 'pixel'}];
}
}
/* eslint-enable */
/* eslint-disable */
import VSHADER from '../shader/v_shader';
import VSHADER2 from '../shader/v_shader2';
/**
 * @file GPU computation
 * @author wangqun@baidu.com, yangmingming@baidu.com
 */
const CONF = {
alpha: false,
antialias: false,
premultipliedAlpha: false,
preserveDrawingBuffer: false,
depth: false,
stencil: false,
failIfMajorPerformanceCaveat: true
};
const MAX_WAIT = 100;
export default class gpu {
constructor(opts = {}) {
        // WebGL version, defaults to 2.0
this.version = 2;
this.opts = opts;
opts.width_raw_canvas = Number(opts.width_raw_canvas) || 512;
opts.height_raw_canvas = Number(opts.height_raw_canvas) || 512;
const canvas = opts.el ? opts.el : document.createElement('canvas');
canvas.addEventListener('webglcontextlost', evt => {
evt.preventDefault();
console.log('webgl context is lost~');
}, false);
        let gl = canvas.getContext('webgl2', CONF);
        if (!!gl) {
            // float32 color buffers are available
            this.version = 2;
            this.textureFloat = gl.getExtension('EXT_color_buffer_float');
            this.internalFormat = gl.R32F;
            this.textureFormat = gl.RED;
            this.downloadInternalFormat = gl.RGBA32F;
        } else {
            gl = canvas.getContext('webgl', CONF) || canvas.getContext('experimental-webgl', CONF);
            if (!gl) {
                // check for a context before touching gl.* constants
                this.version = 0;
                alert('Failed to create a WebGL context in this environment');
            } else {
                this.version = 1;
                this.internalFormat = gl.RGBA;
                this.textureFormat = gl.RGBA;
                this.downloadInternalFormat = gl.RGBA;
                // enable the float texture extension
                this.textureFloat = gl.getExtension('OES_texture_float');
                console.log('float extension is started or not? ' + !!this.textureFloat);
            }
        }
        // turn off features we do not need
gl.disable(gl.DEPTH_TEST);
gl.disable(gl.STENCIL_TEST);
gl.disable(gl.BLEND);
gl.disable(gl.DITHER);
gl.disable(gl.POLYGON_OFFSET_FILL);
gl.disable(gl.SAMPLE_COVERAGE);
gl.enable(gl.SCISSOR_TEST);
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.BACK);
this.gl = gl;
this.initCache();
        // number of sync polls so far
        this.waits = 0;
        console.log('WebGL version is ' + this.version);
        console.log('MAX_TEXTURE_SIZE is ' + gl.getParameter(gl.MAX_TEXTURE_SIZE));
        console.log('MAX_TEXTURE_IMAGE_UNITS is ' + gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS));
}
getWebglVersion() {
return this.version;
}
initCache() {
        // run counter
this.times = 0;
const gl = this.gl;
        // vertex data
let vertices = new Float32Array([
-1.0, 1.0, 0.0, 1.0,
-1.0, -1.0, 0.0, 0.0,
1.0, 1.0, 1.0, 1.0,
1.0, -1.0, 1.0, 0.0]);
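        // Full-screen triangle strip: each vertex is (x, y, u, v) interleaved,
        // which matches the 16-byte stride in vertexAttribPointer below.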
this.vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, this.vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
        // shaders
        this.vertexShader = null;
        // create the vertex shader
        this.initShader(this.version === 2 ? VSHADER2 : VSHADER);
        this.fragmentShader = null;
        // previous texture
        this.prevTexture = null;
        // output texture of the current op
        this.currentTexture = null;
        // framebuffer
this.frameBuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, this.frameBuffer);
        // texture cache for computation
this.cacheTextures = {};
this.uniformLocations = {};
// texture buffer
this.outTextures = [];
// pbo
this.pbo = gl.createBuffer();
}
runVertexShader(program) {
const gl = this.gl;
let aPosition = gl.getAttribLocation(program, 'position');
// Turn on the position attribute
gl.enableVertexAttribArray(aPosition);
// Bind the position buffer.
gl.bindBuffer(gl.ARRAY_BUFFER, this.vertexBuffer);
gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 16, 0);
}
setOutProps(opts) {
this.width_shape_out = opts.width_shape || 1;
this.height_shape_out = opts.height_shape || 1;
this.width_texture_out = opts.width_texture || 1;
this.height_texture_out = opts.height_texture || 1;
this.channel = opts.channel || 0;
this.total_shape = opts.total_shape || 0;
}
isFloatingTexture() {
return (this.textureFloat !== null);
}
createProgram(fshader, out) {
const gl = this.gl;
const program = gl.createProgram();
gl.attachShader(program, this.vertexShader);
gl.attachShader(program, fshader);
gl.linkProgram(program);
        // create the texture cache for the output
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, // Target, matches bind above.
0, // Level of detail.
this.downloadInternalFormat, // Internal format.
out.width_texture,
out.height_texture,
0, // Always 0 in OpenGL ES.
gl.RGBA, // Format for each pixel.
            gl.FLOAT, // Data type for each channel.
null);
gl.bindTexture(gl.TEXTURE_2D, null);
this.outTextures.push(texture);
return program;
}
setProgram(program, isRendered) {
const gl = this.gl;
gl.useProgram(program);
this.program = program;
if (!isRendered) {
this.runVertexShader(program);
}
}
attachShader(fshader) {
const gl = this.gl;
// let index = this.textureBufferIndex % 2;
// const program = this.programs[index];
// this.program = program;
const program = this.program;
// if (this.times < 2) {
// gl.attachShader(program, this.vertexShader);
// }
this.textureBufferIndex = (this.textureBufferIndex + 1) >= 2 ? 0 : 1;
if (!!this.fragmentShader) {
gl.detachShader(program, this.fragmentShader);
}
this.gl.attachShader(program, fshader);
this.fragmentShader = fshader;
gl.linkProgram(program);
        if (this.times++ === 0) {
            gl.useProgram(program);
            this.runVertexShader(program);
        }
}
create(vshaderCode, fshaderCode) {
let gl = this.gl;
if (this.program) {
this.dispose();
}
        // create & bind the program object
let program = this.program = gl.createProgram();
        // create & bind the vertex & fragment shaders
this.initShader(vshaderCode);
this.fragmentShader = this.initShader(fshaderCode, 'fragment');
this.gl.attachShader(program, this.vertexShader);
this.gl.attachShader(program, this.fragmentShader);
gl.linkProgram(program);
gl.useProgram(program);
let aPosition = gl.getAttribLocation(program, 'position');
// Turn on the position attribute
gl.enableVertexAttribArray(aPosition);
// Bind the position buffer.
gl.bindBuffer(gl.ARRAY_BUFFER, this.vertexBuffer);
gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 16, 0);
}
    /**
     * Initialize a shader.
     * @param code shader source code
     * @param type shader type
     * @return {object} the shader on success
     */
initShader(code, type = 'vertex') {
const shaderType = type === 'vertex' ? this.gl.VERTEX_SHADER : this.gl.FRAGMENT_SHADER;
let shader;
if (type === 'vertex' && this.vertexShader) {
shader = this.vertexShader;
} else {
shader = this.gl.createShader(shaderType);
if (type === 'vertex') {
this.vertexShader = shader;
}
this.gl.shaderSource(shader, code);
this.gl.compileShader(shader);
if (!this.gl.getShaderParameter(shader, this.gl.COMPILE_STATUS)) {
throw new Error("compile: " + this.gl.getShaderInfoLog(shader));
}
}
return shader;
}
    /**
     * Update the fragment shader.
     * @param code shader source code
     * @return {boolean} true on success
     */
updateShader(code) {
this.gl.useProgram(this.program);
        // delete the old fragment shader
        if (this.fragmentShader) {
            this.gl.detachShader(this.program, this.fragmentShader);
            this.gl.deleteShader(this.fragmentShader);
            // delete the texture
            this.gl.deleteTexture(this.texture);
        }
        // update
this.fragmentShader = this.initShader(code, 'fragment');
return true;
}
    /**
     * Create and bind the framebuffer, then attach the output texture.
     * @param {int} iLayer layer index
     * @returns {WebGLFramebuffer} The framebuffer
     */
attachFrameBuffer(iLayer) {
this.prevTexture = this.currentTexture;
// this.currentTexture = this.textureBuffer[this.textureBufferIndex % 2];
// this.textureBufferIndex = (this.textureBufferIndex + 1) >= 2 ? 0 : 1;
this.currentTexture = this.outTextures[iLayer];
console.log('this.currentTexture', this.currentTexture);
const gl = this.gl;
gl.framebufferTexture2D(gl.FRAMEBUFFER, // The target is always a FRAMEBUFFER.
gl.COLOR_ATTACHMENT0, // We are providing the color buffer.
gl.TEXTURE_2D, // This is a 2D image texture.
this.currentTexture, // The texture.
0 // 0, we aren't using MIPMAPs
);
gl.viewport(
0,
0,
this.width_texture_out,
this.height_texture_out
);
gl.scissor(
0,
0,
this.width_texture_out,
this.height_texture_out
);
return this.frameBuffer;
}
    // framebuffer completeness check
frameBufferIsComplete() {
let gl = this.gl;
let message;
let status;
let value;
status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
switch (status)
{
case gl.FRAMEBUFFER_COMPLETE:
message = "Framebuffer is complete.";
value = true;
break;
case gl.FRAMEBUFFER_UNSUPPORTED:
message = "Framebuffer is unsupported";
value = false;
break;
case gl.FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
message = "Framebuffer incomplete attachment";
value = false;
break;
case gl.FRAMEBUFFER_INCOMPLETE_DIMENSIONS:
message = "Framebuffer incomplete (missmatched) dimensions";
value = false;
break;
case gl.FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
message = "Framebuffer incomplete missing attachment";
value = false;
break;
default:
message = "Unexpected framebuffer status: " + status;
value = false;
}
return {isComplete: value, message: message};
}
    /**
     * Initialize a texture.
     * @param {int} index texture index
     * @param {Object} item texture descriptor (variable, tensor, data, sizes)
     * @param {int} iLayer layer index
     * @param {boolean} isRendered whether this layer has already run
     */
initTexture(index, item, iLayer, isRendered) {
const gl = this.gl;
let texture;
if (!item.data) {
texture = this.prevTexture;
} else {
// texture = gl.createTexture();
if (isRendered && (iLayer > 0 || (iLayer === 0 && item.tensor !== 'origin'))) {
const tData = this.cacheTextures['' + iLayer];
texture = tData[item.variable + '_' + item.tensor];
} else {
texture = gl.createTexture();
if (index === 0) {
this.cacheTextures['' + iLayer] = this.cacheTextures['' + iLayer] || {};
}
this.cacheTextures['' + iLayer][item.variable + '_' + item.tensor] = texture;
}
}
gl.activeTexture(gl[`TEXTURE${index}`]);
gl.bindTexture(gl.TEXTURE_2D, texture);
if (item.data && (!isRendered || (isRendered && iLayer === 0 && item.tensor === 'origin'))) {
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D,
0,
this.internalFormat,
item.width_texture,
item.height_texture,
0,
this.textureFormat,
gl.FLOAT,
item.data,
0);
}
}
getUniformLoc(name, ilayer, isRendered) {
if (isRendered) {
return this.uniformLocations['' + ilayer][name];
}
let loc = this.gl.getUniformLocation(this.program, name);
        if (loc === null) throw new Error(`getUniformLoc ${name} err`);
        // cache the location
this.uniformLocations['' + ilayer] = this.uniformLocations['' + ilayer] || {};
this.uniformLocations['' + ilayer][name] = loc;
return loc;
}
    // create the texture backing the framebuffer
makeTexure(type, data, opts = {}) {
const gl = this.gl;
let index = this.textureBufferIndex % 2;
let texture = this.textureBuffer[index];
gl.bindTexture(gl.TEXTURE_2D, texture);
// Pixel format and data for the texture
gl.texImage2D(gl.TEXTURE_2D, // Target, matches bind above.
0, // Level of detail.
gl.RGBA, // Internal format.
opts.width_texture_out || this.width_texture_out,
opts.height_texture_out || this.height_texture_out,
0, // Always 0 in OpenGL ES.
gl.RGBA, // Format for each pixel.
            type, // Data type for each channel.
data); // Image data in the described format, or null.
// Unbind the texture.
// gl.bindTexture(gl.TEXTURE_2D, null);
this.attachFrameBuffer();
return texture;
}
render(data = [], iLayer = 0, isRendered = false) {
const gl = this.gl;
let that = this;
let textureIndex = 0;
data.forEach(item => {
if (item.type === 'texture') {
that.initTexture(textureIndex, item, iLayer, isRendered);
gl.uniform1i(that.getUniformLoc(item.variable + '_' + item.tensor, iLayer, isRendered), textureIndex++);
}
else if (item.type === 'uniform') {
gl[item.setter](that.getUniformLoc(item.variable + '_' + item.tensor, iLayer, isRendered), item.data);
}
});
// gl.clearColor(.0, .0, .0, 1);
// gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
}
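    // Async readback path (WebGL2): readPixels(..., offset) copies the
    // framebuffer into a pixel buffer object on the GPU; getBufferSubData
    // maps it later, so the GPU->CPU copy can overlap other work.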
createPBO() {
const gl2 = this.gl;
const buffer = this.pbo;
gl2.bindBuffer(gl2.PIXEL_PACK_BUFFER, buffer);
const sizeBytes = 4 * 4 * this.width_texture_out * this.height_texture_out;
gl2.bufferData(gl2.PIXEL_PACK_BUFFER, sizeBytes, gl2.STREAM_READ);
gl2.readPixels(0, 0, this.width_texture_out, this.height_texture_out, gl2.RGBA, gl2.FLOAT, 0);
gl2.bindBuffer(gl2.PIXEL_PACK_BUFFER, null);
return buffer;
}
downloadFoat32TensorFromBuffer(buffer) {
const gl2 = this.gl;
const size = 4 * this.width_texture_out * this.height_texture_out;
const pixels = new Float32Array(size);
gl2.bindBuffer(gl2.PIXEL_PACK_BUFFER, buffer);
gl2.getBufferSubData(gl2.PIXEL_PACK_BUFFER, 0, pixels);
gl2.bindBuffer(gl2.PIXEL_PACK_BUFFER, null);
// log.start('后处理-readloop');
// let result = [];
// let offset = 0;
// for (let h = 0; h < this.height_texture_out; h++) {
// // 纪录第1和2行数据
// let temp1 = [];
// let temp2 = [];
// for (let w = 0; w < this.width_texture_out; w++) {
// temp1.push(pixels[offset]);
// temp1.push(pixels[offset + 1]);
// temp2.push(pixels[offset + 2]);
// temp2.push(pixels[offset + 3]);
// offset += 4;
// }
// result = result.concat(temp1);
// result = result.concat(temp2);
// }
let result = [];
for (let i = 0; i < this.width_texture_out * this.height_texture_out; i++) {
result.push(pixels[4 * i]);
}
// const result = Array.prototype.slice.call(pixels);
// console.dir(['result', result]);
// log.end('后处理-readloop');
return result;
}
getWebglError(status) {
const gl2 = this.gl;
switch (status) {
case gl2.NO_ERROR:
return 'NO_ERROR';
case gl2.INVALID_ENUM:
return 'INVALID_ENUM';
case gl2.INVALID_VALUE:
return 'INVALID_VALUE';
case gl2.INVALID_OPERATION:
return 'INVALID_OPERATION';
case gl2.INVALID_FRAMEBUFFER_OPERATION:
return 'INVALID_FRAMEBUFFER_OPERATION';
case gl2.OUT_OF_MEMORY:
return 'OUT_OF_MEMORY';
case gl2.CONTEXT_LOST_WEBGL:
return 'CONTEXT_LOST_WEBGL';
default:
return `Unknown error code ${status}`;
}
}
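    // GPU/CPU sync: insert a fence after the draw calls and poll
    // clientWaitSync so the readback does not block the main thread.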
createAndWaitForFence() {
const gl2 = this.gl;
        const isFenceEnabled = (typeof gl2.fenceSync === 'function');
let isFencePassed = () => true;
if (isFenceEnabled) {
const sync = gl2.fenceSync(gl2.SYNC_GPU_COMMANDS_COMPLETE, 0);
gl2.flush();
isFencePassed = () => {
const status = gl2.clientWaitSync(sync, 0, 0);
return status === gl2.ALREADY_SIGNALED ||
status === gl2.CONDITION_SATISFIED;
};
}
return new Promise(resolve => {
this.pollItem(isFencePassed, resolve);
});
}
pollItem(isDone, resolveFn) {
const fn = () => {
if (isDone()) {
resolveFn();
return;
}
setTimeout(fn, 1);
};
fn();
}
compute() {
let gl = this.gl;
// log.start('后处理-readinside');
const tt = +Date.now();
let pixels = new Float32Array(this.width_texture_out * this.height_texture_out * 4);
// gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);
const tt2 = +Date.now();
gl.readPixels(0, 0, this.width_texture_out, this.height_texture_out, gl.RGBA, gl.FLOAT, pixels, 0);
// console.log('本次读取数据时间是' + (+Date.now() - tt2)+ ',' + (tt2 - tt));
// log.end('后处理-readinside');
// log.start('后处理-readloop');
let result = [];
for (let i = 0; i < this.width_texture_out * this.height_texture_out; i++) {
result.push(pixels[4 * i]);
}
// log.end('后处理-readloop');
return result;
}
dispose() {
const gl = this.gl;
// this.cacheTextures.forEach(texture => {
// gl.deleteTexture(texture);
// });
        this.cacheTextures = {};
        (this.programs || []).forEach(program => {
            gl.detachShader(program, this.vertexShader);
            gl.deleteShader(this.vertexShader);
            gl.deleteProgram(program);
        });
        this.programs = [];
}
}
/* eslint-disable */
import GraphExecutor from '../executor/executor';
import IO from '../feed/imageFeed';
import Runtime from '../runtime/runtime';
import OpData from '../utils/opData';
import Factory from '../factory/fshader/factory';
import Utils from '../utils/utils';
/**
 * @file Graph: builds the model network and drives execution
 * @author wangqun@baidu.com
 */
let start = 0;
// create the factory instance
const factory = new Factory({});
// get the input configs of the ops
const opConfs = factory.getOpConfs();
export default class Graph {
constructor(options) {
this.version = '0.0.1';
this.handler = 'io.IOHandler';
this.weightMap = '';
this.options = options || {};
        // feed data
this.feed = null;
this.index = 0;
this.feedOp = null;
this.feedItem = null;
this.test = false;
this.isExecuted = false;
        // number of network layers
this.iLayer = 0;
if (this.options && this.options.options && this.options.options.test === true) {
this.test = true;
}
if (!this.inst) {
// op runner
this.inst = Runtime.init();
factory.setWebglVersion(this.inst.getWebglVersion());
}
}
buildOpData(op) {
const executor = this.constructExecutor(op);
const opData = new OpData(op.type, executor.inputs, executor.outputs, executor.attrs);
const name = opData.name;
const fsCode = factory.buildShader(name, opData.data);
opData.fsCode = fsCode;
opData.program = this.inst.createProgram(fsCode, opData.tensor['out']);
opData.renderData = opConfs[name].map(elem => {
let item = Object.assign({}, elem);
const tensorData = opData.tensor[item.tensor];
if (item.type === 'texture') {
item.data = tensorData.data;
if (this.feedOp.id === op.id && item.tensor === 'origin') {
item.shape = tensorData.shape;
this.feedItem = item;
}
item['width_texture'] = tensorData['width_texture'];
item['height_texture'] = tensorData['height_texture'];
item['channel'] = tensorData['channel'];
} else if (item.type === 'uniform') {
item.data = tensorData[item.variable];
}
return item;
});
// console.timeEnd('opData.renderData');
opData.iLayer = this.iLayer++;
op.opData = opData;
// delete op.inputs;
// delete op.outputs;
// delete op.attrs;
}
execute_(executor) {
if (executor.type === 'fetch') {
return;
}
executor.execute(this.inst, this.isExecuted);
// if (executor.next && start++ < 2) {
if (executor.next) {
const id = executor.next;
const next = this.getTensor(id);
this.execute_(next[0]);
}
}
/**
* Executes inference for the model for given input tensors.
* @param inputs
* @param outputs
* @returns {*}
*/
execute(inputs) {
this.feed = inputs;
const executor = this.getNetsStart(this.weightMap);
if (!this.inst) {
this.inst = Runtime.init({
'width_raw_canvas': 512,
'height_raw_canvas': 512
});
}
if (this.isExecuted) {
this.updateFeed();
}
this.execute_(executor[0]);
this.isExecuted = true;
return this.inst;
}
updateFeed() {
this.feedItem.data = this.feed.input[0].data;
// Utils.img2texture(this.feedItem);
}
/**
* predict enter
* @param inputs
* @param config
*/
predict(inputs, config) {
return this.execute_(inputs, true, this.outputNodes);
}
getTensorAttr(name) {
return this.data.vars.filter((item, i) => {
if (name === item.name)
return item;
});
}
constructExecutor(executor) {
let that = this;
const inputName = executor.inputsName[0];
const input = executor.inputs;
const output = executor.outputs;
Object.keys(output).forEach(function(key){
output[key] = that.getTensorAttr(output[key][0]);
});
Object.keys(input).forEach(function(key){
if (that.test && ((key === 'Input') || (key === 'X'))) {
input[key] = that.getTensorAttr(input[key][0]);
that.feedOp = executor;
}
else if ((key === 'Input') && (inputName === 'pixel')) {
// const pixel = that.getTensorAttr(inputName);
// const io = new IO();
// input[key] = io.fromPixels(that.feed, pixel);
input[key] = that.feed.input;
that.feedOp = executor;
}
else if ((key === 'Input') && (inputName === 'image' || inputName === 'x')) {
// that.feed.input[0].data = that.testData;
input[key] = that.feed.input;
that.feedOp = executor;
}
else {
input[key] = that.getTensorAttr(input[key][0]);
}
});
// console.log(input);
return {
inputs: input,
outputs: output,
attrs: executor.attrs,
type: executor.type,
next: executor.next
};
}
/**
* Construct Ops Relationship
* @param ops
* @returns {*}
*/
constructOpsMap(ops) {
return ops.map((item, idx) => {
const outputsName = item.outputsName[0];
const next = this.getNextExecutor(ops, outputsName);
if (next.length > 0) {
item.next = next[0].id;
}
return item;
});
}
/**
* Get Ops Nets Start Node
* @param ops
* @returns {*}
*/
getNetsStart(ops) {
return ops.filter((item) => {
if (item.type === 'feed') {
return true;
}
});
}
/**
* Get Ops Nets Last Node
* @param ops
* @returns {*}
*/
getNetsEnd(ops) {
return ops.filter((item) => {
if (item.type === 'fetch') {
return true;
}
});
}
/**
* get tensor by id
* @param id
* @returns {*}
*/
getTensor(id) {
return this.weightMap.filter((item, i) => {
if (id === item.id)
return item;
});
}
/**
* Create Ops Executor Object Map
* @param ops
* @returns {*}
*/
createOpsMap(ops) {
return ops.map((item, idx) => {
item.idx = idx;
const graphExecutor = new GraphExecutor(item);
return graphExecutor;
});
}
/**
* Get The Next Executor need Exec
* @param ops
* @param id
* @returns {*}
*/
getNextExecutor(ops, id) {
return ops.filter((item, key) => {
if (id === item.inputsName[0]) {
return true;
}
});
}
/**
* dispose
*/
dispose() {
this.executor.dispose();
}
}
/* eslint-enable */
[中文版](./README_cn.md)
# PaddleJS Model Loader
Baidu PaddleJS uses this loader to fetch the model into the browser. The loader handles browser-friendly JSON and binary file types, supports loading a single file as well as file chunks, and exploits the browser's parallel requests to load the inference model quickly.
## Demonstration
Create the Paddle object, specify the model address, add configuration parameters, and load the model through the load method.
## Parameter description
| Object | Parameter | Description |
| ------------- | ------------- | ------------- |
| MODEL_ADDRESS | dir | directory that holds the model |
| MODEL_ADDRESS | main | main file |
| options | multipart | whether to fetch in chunks |
| options | dataType | binary/json |
| options | fileCount | number of chunks |
| options | ietest | whether to enable test output |
```bash
const MODEL_ADDRESS = {
    dir: `/${path}/`, // directory that holds the model
    main: 'model.json', // main file
};
const paddle = new Paddle({
    urlConf: MODEL_ADDRESS,
    options: {
        multipart: true,
        dataType: 'binary',
        options: {
            fileCount: n, // the weights were split into n files
            getFileName(i) { // name of the i-th file
                return 'chunk_' + i + '.dat'; // e.g. chunk_1.dat ... chunk_n.dat
            }
        }
    }
});
model = await paddle.load();
```
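For reference, chunked binary weights are stitched back into a single typed array in request order. The sketch below mirrors the loader's `fetchChunks` logic, assuming every chunk is a raw float32 buffer:
```javascript
// Minimal sketch: fetch fileCount raw float32 chunks and concatenate them.
async function fetchAllChunks(dir, fileCount, getFileName) {
    const buffers = await Promise.all(
        Array.from({length: fileCount}, (_, i) =>
            fetch(dir + getFileName(i + 1)).then(res => res.arrayBuffer()))
    );
    const parts = buffers.map(buf => new Float32Array(buf));
    const total = parts.reduce((sum, part) => sum + part.length, 0);
    const allData = new Float32Array(total);
    let offset = 0;
    for (const part of parts) {
        allData.set(part, offset); // copy each chunk in order
        offset += part.length;
    }
    return allData;
}
```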
## Browser coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
# PaddleJS Model Loader
Baidu PaddleJS uses this loader to fetch the model into the browser. The loader handles browser-friendly JSON and binary file types, supports loading a single file as well as file chunks, and exploits the browser's parallel requests to load the inference model.
## Usage
Create the Paddle object, specify the model address, add configuration parameters, and load the model through the load method.
## Parameters
| Object | Parameter | Description |
| ------------- | ------------- | ------------- |
| MODEL_ADDRESS | dir | directory that holds the model |
| MODEL_ADDRESS | main | main file |
| options | multipart | whether to fetch in chunks |
| options | dataType | binary/json |
| options | fileCount | number of chunks |
| options | ietest | whether to enable test output |
```bash
const MODEL_CONFIG = {
    dir: `/${path}/`, // directory that holds the model
    main: 'model.json', // main file
};
const paddle = new Paddle({
    urlConf: MODEL_CONFIG,
    options: {
        multipart: true,
        dataType: 'binary',
        options: {
            fileCount: n, // the weights were split into n files
            getFileName(i) { // name of the i-th file
                return 'chunk_' + i + '.dat'; // e.g. chunk_1.dat ... chunk_n.dat
            }
        }
    }
});
model = await paddle.load();
```
## Browser coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
/* eslint-disable */
/**
 * @file loader: model loader
 * @author wangqun@baidu.com
 */
export default class Loader {
    constructor(modelConfig, options) {
        this.version = '0.0.1';
        this.data = {};
        this.modelConfig = modelConfig;
this.options = options;
this.multipart = false;
this.test = false;
// fetch xhr jsonp
this.params = {type: 'fetch'};
        // configure chunked model loading
if (this.options) {
this.multipart = this.options.multipart;
if (options.dataType === 'binary') {
this.binaryOption = options.options;
this.dataType = options.dataType;
}
if (options.test) {
this.test = true;
}
}
if (!this.loadOptions) {
this.loadOptions = {};
}
}
fetchOneChunk(path) {
return this.fetch(path).then(request => {
return request.arrayBuffer();
})
}
fetchJson(path) {
return this.fetch(path).then(request => {
return request.json();
})
}
fetchChunks() {
let counts = this.binaryOption.fileCount;
let chunkArray = [];
for (let i = 1; i <= counts; i++) {
chunkArray.push(
                this.fetchOneChunk(this.modelConfig.dir + this.binaryOption.getFileName(i))
);
}
// console.time('加载时间');
return Promise.all(chunkArray).then(chunks => {
// console.timeEnd('加载时间');
let chunksLength = 0;
let f32Array = [];
let float32Chunk;
chunks.forEach(i => {
float32Chunk = new Float32Array(i);
f32Array.push(float32Chunk);
chunksLength += float32Chunk.length;
});
this.allData = new Float32Array(chunksLength);
let offset = 0;
f32Array.forEach(i => {
i.forEach(num => {
this.allData[offset] = num;
offset += 1;
})
});
});
}
fetchData(name) {
        const path = this.modelConfig.dir + name + '.json';
let load = new Promise((resolve, reject) => {
fetch(path, {
method: 'get', mode: 'cors', credentials: "include",
headers: { 'Content-Type': 'application/json;charset=utf-8'}})
.then(response => response.json())
                .then(responseData => resolve(responseData))
                .catch(err => reject(err))
})
return load;
}
    async fetchAllData(arr) {
const TMP_SCHEME_REGEX = /\.tmp/;
const TMP_REGEX = /\-/;
let requesterArr = arr.map(item => {
if (item.name
&& item.name.match(TMP_SCHEME_REGEX) === null
&& item.name.match(TMP_REGEX) === null) {
return this.fetchData(item.name).then(data => item.data = data);
}
return Promise.resolve();
});
return Promise.all(requesterArr);
}
traverse (arr) {
const TMP_SCHEME_REGEX = /\.tmp/;
const TMP_REGEX = /\-/;
        let marker = 0; // current read position
        let len; // length of the current var
arr.filter(item => {
return item.name
&& item.name.match(TMP_SCHEME_REGEX) === null
&& item.name.match(TMP_REGEX) === null;
})
.forEach(item => {
            len = item.shape.reduce((a, b) => a * b); // length is the product of the shape
item.data = this.allData.slice(marker, marker + len);
marker += len;
});
}
fetch(path, params) {
params = params || this.params;
let method = params.method || 'get';
let mode = params.mode || 'no-cors';
let myHeaders = new Headers();
return fetch(path, {
method: method,
// mode: mode,
// credentials: 'include',
headers: myHeaders
});
}
fetchModel(params) {
params = params || this.params;
        const path = this.modelConfig.dir + this.modelConfig.main;
let load = null;
        // jsonp request
if (params && params.type === 'jsonp') {
let json;
let s = document.createElement('script');
s.src = path + '&jsonpCallback=fn';
window.fn = function(data) {
json = data;
// console.log(json);
};
            // once the script is inserted into the document, the resource in src starts loading
document.body.appendChild(s);
load = new Promise((resolve, reject) => {
s.onload = function(e) {
resolve(json);
}
s.onerror = function() {
reject(json);
}
});
this.data = load;
}
        // native fetch
else if (params.type === 'fetch') {
load = new Promise((resolve, reject) => {
this.fetch(path, params)
.then(response => response.json())
                    .then(responseData => resolve(responseData))
                    .catch(err => reject(err))
});
this.data = load;
}
// ajax (XMLHttpRequest): not implemented yet
else if (params.type === 'xhr') {
this.data = load;
}
return load;
}
async load() {
let that = this;
const artifacts = this.data = await this.fetchModel();
if (this.multipart === true) {
if (this.dataType === 'binary') {
await this.fetchChunks()
.then(() => this.traverse(artifacts.vars));
}
else {
await that.fetchAllData(artifacts.vars);
}
}
return artifacts;
}
}
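// A minimal usage sketch (hedged: the concrete paths and the chunk-naming scheme
// below are illustrative, not part of this repo). It mirrors how this class reads
// modelGonfig.dir / modelGonfig.main and options.multipart / dataType / options.
async function loaderUsageSketch() {
    const loader = new Loader(
        {dir: '/model/', main: 'model.json'},
        {multipart: true, dataType: 'binary', options: {fileCount: 4, getFileName: i => 'chunk_' + i + '.dat'}}
    );
    const artifacts = await loader.load();
    return artifacts.vars.length;
}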
/* eslint-enable */
[中文版](./README_cn.md)
# PaddleJS Examples
Baidu PaddleJS either runs ready-made JavaScript models or converts Paddle models to run in the browser.
## Demonstration
At present, the TinyYolo model runs in under 30 ms in the browser, which is fast enough for typical real-time scenarios.
## Browser coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
## Building deployment
```bash
cd web # Go to the project root
npm i # Install dependencies
mkdir dist # Create the resource directory
cd dist # Enter the resource directory
git clone https://github.com/DerekYangMing/Paddle-Web-Models.git # Get the models
mv Paddle-Web-Models/separablemodel . # Move the model to the designated location
cd .. # Return to the project root
npm run tinyYolo # Run the tinyYolo demo
```
## Preview
1. Open url: https://localhost:<port>/
2. Click the 【upload picture】 button.
## Result
![image](./tinyYolo/demoshow.png)
# PaddleJS Examples
Baidu PaddleJS either runs ready-made JavaScript models or converts Paddle models to run in the browser.
## Demonstration
At present, the TinyYolo model runs in under 30 ms in the browser, which is fast enough for typical real-time scenarios.
## Browser coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
## Building deployment
```bash
cd web # Go to the project root
npm i # Install dependencies
mkdir dist # Create the resource directory
cd dist # Enter the resource directory
git clone https://github.com/DerekYangMing/Paddle-Web-Models.git # Get the models
mv Paddle-Web-Models/separablemodel . # Move the model to the designated location
cd .. # Return to the project root
npm run tinyYolo # Start the tinyYolo online inference service
```
## How to preview the demo
1. Open url: https://localhost:<port>/ in the browser.
2. Press the 【start detection】 button.
3. Point a face at the camera; if everything works, the face is detected correctly.
## Result
![image](./tinyYolo/demoshow.png)
/* eslint-disable */
import 'babel-polyfill';
import Loader from '../loader/loader';
import Graph from '../graph/graph';
/**
* @file Paddle object: loads the model and runs online inference
* @author wangqun@baidu.com
*/
export default class Paddle {
constructor(options) {
this.version = '0.0.1';
this.loader = '';
this.options = options;
this.graph = '';
this.multipart = false;
// feed data
this.feed = null;
this.index = 0;
this.feedOp = null;
this.feedItem = null;
this.test = false;
this.isExecuted = false;
// number of network layers
this.iLayer = 0;
// fetch xhr jsonp
this.params = {type: 'fetch'};
}
async load() {
if (!this.options) {
// TODO(saniac): improve this error message
throw new Error(
'modelGonfig in loadGraphModel() cannot be null. Please provide a url ' +
'or an IOHandler that loads the model');
}
const model = new Loader(this.options.urlConf, this.options.options);
await model.load();
this.preGraph(model);
return this;
}
preGraph (artifacts) {
let that = this;
const graph = new Graph(that.options);
that.graph = graph;
that.graph.data = artifacts.data;
const opsMap = that.graph.createOpsMap(that.graph.data.ops, that.graph.data.vars);
that.graph.weightMap = that.graph.constructOpsMap(opsMap);
}
/**
* Executes inference for the model with the given input tensors.
* @param inputs
* @returns {*}
*/
execute(inputs) {
let that = this;
this.feed = this.graph.feed = inputs;
// build the op data
if (!this.graph.isExecuted) {
this.graph.weightMap.forEach(op => {
const type = op.type;
if (type !== 'feed' && type !== 'fetch') {
console.log(op.type);
that.graph.buildOpData(op);
}
});
}
this.graph.execute(inputs);
return this.graph.inst;
}
updateFeed() {
this.graph.feedItem.data = this.graph.feed.input[0].data;
// Utils.img2texture(this.graph.feedItem);
}
/**
* dispose
*/
dispose() {
this.graph.dispose();
}
}
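// A minimal usage sketch (hedged: urlConf fields follow the Loader above, and
// the feed object follows updateFeed(), which reads feed.input[0].data; all
// values here are illustrative).
async function paddleUsageSketch(feed) {
    const paddle = new Paddle({
        urlConf: {dir: '/model/', main: 'model.json'},
        options: {multipart: false}
    });
    await paddle.load();
    return paddle.execute(feed); // returns the runtime instance (graph.inst)
}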
/* eslint-enable */
/* eslint-disable */
import Gpu from '../gpu/gpu';
import getMaxUniforms from '../test/getMaxUniforms';
/**
* @file GPU runtime
* @author wangqun@baidu.com, yangmingming@baidu.com
*
*/
export default {
/**
* Initialize and create the GPU instance
* @param {Object} opts runtime options, e.g. {el: canvas, dim: 256}
* @return {Object} this instance
*/
init(opts = {}) {
const gpu = this.gpu = new Gpu(opts);
if (gpu.isFloatingTexture()) {
return this;
} else {
return null;
}
},
getWebglVersion() {
return this.gpu.getWebglVersion();
},
run(opName, opData, isRendered) {
// console.dir(['fscode', opData.fsCode]);
// let time = +Date.now();
// let start = time;
// let timeObj = {};
if (!opData.isPass) {
console.log('skipping op: ' + opName);
return this;
}
// set the GPU parameters
const gpu = this.gpu;
gpu.setOutProps(opData.tensor['out']);
// attach the framebuffer texture
gpu.attachFrameBuffer(opData.iLayer);
// let end = +Date.now();
let bufferStatus = gpu.frameBufferIsComplete();
if (bufferStatus.isComplete) {
// start = +Date.now();
// timeObj['buferstatus-time'] = start - end;
// gpu.attachShader(opData.fshader);
gpu.setProgram(opData.program, isRendered);
// end = +Date.now();
// timeObj['createshader-time'] = end - start;
// timeObj['jsTime'] = end - time;
// statistic.push(timeObj);
// start the computation
this.gpu.render(opData.renderData, opData.iLayer, isRendered);
return this;
} else {
return bufferStatus.message;
}
},
/**
* Read the op's computed result and return the data
*/
read2() {
let bufferStatus = this.gpu.frameBufferIsComplete();
if (bufferStatus.isComplete) {
return this.gpu.compute();
}
return null;
},
async read() {
const pbo = this.gpu.createPBO();
await this.gpu.createAndWaitForFence();
// log.end('execution time');
// log.start('post-processing');
// a fetch op execution (or its output) should really happen here
// log.start('post-processing: read data');
// start reading the data
return this.gpu.downloadFoat32TensorFromBuffer(pbo);
},
createProgram(fsCode, outTensor) {
const fshader = this.gpu.initShader(fsCode, 'fragment');
const program = this.gpu.createProgram(fshader, outTensor);
// test: number of uniforms
// const maxUniforms = getMaxUniforms(this.gpu.gl, program);
// alert(maxUniforms.maxFragmentShader);
// console.table(maxUniforms.uniforms);
return program;
},
// release resources
dispose() {
this.gpu.dispose();
}
};
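// A minimal usage sketch of the runtime above (hedged: fsCode and opData are
// produced elsewhere by the op factory; the names here are illustrative).
async function runtimeUsageSketch(runtime, canvas, opName, opData) {
    if (!runtime.init({el: canvas})) {
        throw new Error('floating point textures are not supported');
    }
    opData.program = runtime.createProgram(opData.fsCode, opData.tensor['out']);
    runtime.run(opName, opData, false);
    return runtime.read(); // async readback via PBO + fence
}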
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
*/
export default `
// activation functions
float prelu(float x, float p, float b) {
float result = x;
if (x < 0.0) {
result = x * p;
}
return result;
}
float relu6(float x, float threshold, float b) {
float result = max(0.0,x);
result = min(result,threshold);
return result;
}
float leakyRelu(float x, float p, float b) {
float result = max(x, x * p);
return result;
}
float scale(float x, float p, float b) {
float result = p * x + b;
return result;
}
float sigmoid(float x, float y, float z) {
float result = 1.0 / (1.0 + exp(-x));
return result;
}
float softmax(float x, float p, float b) {
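// note: exp(x) / (10.0 * exp(x)) reduces to the constant 0.1 for every x;
// this looks like a placeholder rather than a real softmax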
float result = exp(x) / (10.0 * exp(x));
return result;
}
`;
/* eslint-disable */
/**
* @file shared shader parameters
* @author yangmingming
*/
export default `
// input data for dynamic ops
const float multi_value = float(MULTI_VALUE);
const float bias_value = float(BIAS_VALUE);
// output data
const int width_shape_out = WIDTH_SHAPE_OUT;
const int height_shape_out = HEIGHT_SHAPE_OUT;
const int width_texture_out = WIDTH_TEXTURE_OUT;
const int height_texture_out = HEIGHT_TEXTURE_OUT;
const int channel_out = CHANNEL_OUT;
const int offset_y_out = OFFSET_Y_OUT;
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
*
*/
export default `
int getArrayIndexFromTensorPos_TENSOR_NAME(TENSOR_TYPE tensorPos) {
int index = 0;
for (int i = 0; i < length_shape_TENSOR_NAME; i++) {
index += tensorPos[i] * numbers_shape_TENSOR_NAME[i];
}
return index;
}
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
*/
export default `
int getArrayIndexFromTexturePos_TEXTURE_NAME(vec3 pos) {
int x = int(floor(pos.x));
int y = int(floor(pos.y));
int d = int(floor(pos.z));
return (width_TEXTURE_NAME * y + x) * 4 + d;
}
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
* @desc compute the output tensor position
*/
export default `
ivec4 getOutputTensorPos() {
// recover the real-size coordinates
vec2 outCoord = moveTexture2PosToReal_texture_out(vCoord.xy);
// convert texture-space coordinates to a tensor-space position
int x = int(outCoord.x / float(channel_out));
int c = int(mod(outCoord.x, float(channel_out)));
int y = int(mod(outCoord.y, float(height_shape_out)));
int b = int(outCoord.y / float(height_shape_out));
return ivec4(b, c, y, x);
}
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
* desc fetch the value at the current texture position
*/
// fetch pixels from the texture
export default `
#define getPixelsFromTexturePos_TEXTURE_NAME(pos) TEXTURE2D(TEXTURE_NAME, pos)
`;
/* eslint-disable */
/**
* @file shared shader function: sum of powers over the [H, W] range
* @author yangmingming
*/
export default `
float getRangePowSumFromArrayIndex_TEXTURE_NAME(int start, float p, float mean) {
float result = 0.0;
for (int i = 0; i < (width_shape_TENSOR_NAME * height_shape_TENSOR_NAME); i++) {
vec3 pos = getTexturePosFromArrayIndex_TEXTURE_NAME(i + start);
result += pow(getValueFromTexturePos_TEXTURE_NAME(pos) - mean, p);
}
return result;
}
`;
/* eslint-disable */
/**
* @file shared shader function: sum over the [H, W] range
* @author yangmingming
*/
export default `
float getRangeSumFromArrayIndex_TEXTURE_NAME(int start) {
float result = 0.0;
for (int i = 0; i < (width_shape_TENSOR_NAME * height_shape_TENSOR_NAME); i++) {
vec3 pos = getTexturePosFromArrayIndex_TEXTURE_NAME(i + start);
result += getValueFromTexturePos_TEXTURE_NAME(pos);
}
return result;
}
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
*/
// TENSOR_NAME: tensor name
// get the tensor position ivec4(batch, channel, height, width) of array element N
export default `
iTENSOR_TYPE getTensorPosFromArrayIndex_TENSOR_NAME(int n) {
iTENSOR_TYPE pos;
pos[0] = n / numbers_shape_TENSOR_NAME[0];
for (int i = 1; i < length_shape_TENSOR_NAME; i++) {
n = int(mod(float(n), float(numbers_shape_TENSOR_NAME[i - 1])));
pos[i] = n / numbers_shape_TENSOR_NAME[i];
}
return pos;
}
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
*/
// TEXTURE_NAME: texture name
// WIDTH_TEXTURE_NAME_VALUE: texture width
// HEIGHT_TEXTURE_NAME_VALUE: texture height
// get the texture position of array element N
// const int width_TEXTURE_NAME = WIDTH_TEXTURE_NAME_VALUE;
// const int height_TEXTURE_NAME = HEIGHT_TEXTURE_NAME_VALUE;
export default `
vec3 getTexturePosFromArrayIndex_TEXTURE_NAME(int n) {
vec3 pos;
pos.z = mod(float(n), 4.0);
n /= 4;
int y = n / width_TEXTURE_NAME;
float width = float(width_TEXTURE_NAME);
float x = mod(float(n), width);
pos.x = x / width;
pos.y = float(y) / float(height_TEXTURE_NAME);
return pos;
}
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
* desc fetch the value at a given tensor position
*/
export default `
// fetch the value at a given tensor position
float getValueFromTensorPos_TENSOR_NAME(int r, int g, int b, int a) {
vec4 pixels = TEXTURE2D(texture_TENSOR_NAME,
vec2(
(float(a * channel_TENSOR_NAME + g) + 0.5) / float(width_texture_TENSOR_NAME),
(float(r * height_shape_TENSOR_NAME + b) + 0.5) / float(height_texture_TENSOR_NAME)
)
);
// only the r channel is used
return pixels.r;
}
// compact layout: fetch the value at a given tensor position
float getValueFromTensorPosLimit_TENSOR_NAME(int r, int g, int b, int a) {
float halfW = ceil(float(width_shape_TENSOR_NAME) / 2.0);
int x = int(mod(float(a), halfW));
int offsetY = 0;
if (a > x) {
offsetY = height_shape_TENSOR_NAME;
}
vec4 pixels = TEXTURE2D(texture_TENSOR_NAME,
vec2(
(float(x * channel_TENSOR_NAME + g) + 0.5) / float(width_texture_TENSOR_NAME),
(float(r * 2 * height_shape_TENSOR_NAME + b + offsetY) + 0.5) / float(height_texture_TENSOR_NAME)
)
);
return pixels.r;
}
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
* desc packed layout: fetch the value at a given tensor position
*/
export default `
float getValueFromTensorPosPacked_TENSOR_NAME(int r, int g, int b, int a) {
int y = b / 2;
int yOffset = int(mod(float(b), 2.0));
int x = a / 2;
int xOffset = int(mod(float(a), 2.0));
int height = height_shape_TENSOR_NAME + offset_y_TENSOR_NAME;
vec4 pixels = TEXTURE2D(texture_TENSOR_NAME, vec2((float(x) + 0.5) / float(width_texture_TENSOR_NAME), (float(g * height / 2 + y) + 0.5) / float(height_texture_TENSOR_NAME)));
int index = 0;
if (xOffset == 0 && yOffset == 0) {
return pixels[0];
}
else if (xOffset == 1 && yOffset == 0) {
return pixels[1];
}
else if (xOffset == 0 && yOffset == 1) {
return pixels[2];
}
return pixels[3];
}
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
* desc fetch the value at a given texture position
*/
// TEXTURE_NAME: texture name
// fetch data from the texture
// uniform sampler2D TEXTURE_NAME;
export default `
float getValueFromTexturePos_TEXTURE_NAME(vec3 pos) {
vec4 pixels = TEXTURE2D(TEXTURE_NAME, pos.xy);
int d = int(pos.z);
if (d == 0) {
return pixels.r;
} else if (d == 1) {
return pixels.g;
} else if (d == 2) {
return pixels.b;
}
return pixels.a;
}
`;
/* eslint-disable */
/**
* @file shared shader functions
* @author yangmingming
* desc coordinate conversion
*/
// TEXTURE_NAME: texture name
// convert texture coordinates to real-size coordinates
export default `
// vec2 moveTexture2PosToReal_TEXTURE_NAME(vec2 v) {
// return v * _2d_shape_TEXTURE_NAME;
// // vec2 v2;
// // v2.x = v.x * float(width_TEXTURE_NAME);
// // v2.y = v.y * float(height_TEXTURE_NAME);
// // return v2;
// }
vec2 moveTexture2PosToReal_TEXTURE_NAME(vec2 v) {
vec2 v2;
v2.x = v.x * float(width_TEXTURE_NAME);
v2.y = v.y * float(height_TEXTURE_NAME);
return v2;
}
`;
/* eslint-disable */
/**
* @file shader preamble
* @author yangmingming
*/
export default `
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
precision highp int;
#else
precision mediump float;
precision mediump int;
#endif
varying vec2 vCoord;
void setOutput(float result) {
gl_FragColor.r = result;
}
`;
/* eslint-disable */
/**
* @file shader preamble, WebGL 2.0
* @author yangmingming
*/
export default `#version 300 es
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
precision highp int;
#else
precision mediump float;
precision mediump int;
#endif
// texture coordinates passed through from the vertex shader
in vec2 vCoord;
out vec4 outColor;
void setOutput(float result) {
outColor.r = result;
}
`;
/* eslint-disable */
/**
* @file activation function
* @author yangmingming
*/
// activation function
export default `
float prelu(float x, float p, float b) {
float result = x;
if (x < 0.0) {
result = x * p;
}
return result;
}
`;
/* eslint-disable */
/**
* @file activation function
* @author wangqun@baidu.com
*/
export default `
float scale(float x, float p, float b) {
float result = p * x + b;
return result;
}
`;
/* eslint-disable */
/**
* @file activation function
* @author yangmingming
*/
// activation function
export default `
float sigmoid(float x, float y, float z) {
float result = 1.0 / (1.0 + exp(-x));
return result;
}
`;
/* eslint-disable */
/**
* @file softmax activation function
* @author wangqun
*/
export default `
float softmax(float x, float p, float b) {
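// note: this body duplicates prelu and is not a real softmax; the softmax op
// itself uses a dedicated shader (see the softmax main function further below)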
float result = x;
if (x < 0.0) {
result = x * p;
}
return result;
}
`;
/* eslint-disable */
/**
* @file shared shader functions (tail), method 1: compute output positions
* @author yangmingming
*/
export default `
vec2 _2d_shape_texture_out = vec2(float(width_texture_out), float(height_texture_out));
ivec4 getOutputTensorPos() {
// recover the real-size coordinates
vec2 outCoord = vCoord.xy * _2d_shape_texture_out;
int x = int(outCoord.x / float(channel_out));
int c = int(mod(outCoord.x, float(channel_out)));
int y = int(mod(outCoord.y, float(height_shape_out)));
int b = int(outCoord.y / float(height_shape_out));
return ivec4(b, c, y, x);
}
ivec4 getOutputTensorPosLimit() {
// recover the real-size coordinates
vec2 outCoord = vCoord.xy * _2d_shape_texture_out;
float offsetY = floor(outCoord.y / float(height_shape_out));
int x = int(outCoord.x / float(channel_out));
if (mod(offsetY, 2.0) > 0.0) {
x += int(ceil(float(width_shape_out) / 2.0));
}
int y = int(mod(outCoord.y, float(height_shape_out)));
int c = int(mod(outCoord.x, float(channel_out)));
int b = int(outCoord.y / float(2 * height_shape_out));
return ivec4(b, c, y, x);
}
ivec4 getOutputPackedTensorPos() {
// recover the real-size coordinates
vec2 outCoord = vCoord.xy * _2d_shape_texture_out;
int height = height_shape_out + offset_y_out;
int x = int(outCoord.x);
int c = int(outCoord.y / float(height / 2));
int y = int(mod(outCoord.y, float(height / 2)));
int b = 0;
return ivec4(b, c, y, x);
}
`;
/* eslint-disable */
/**
* @file additional types
* @author yangmingming
*/
// extend the shader ivec types
export default `
struct ivec5 {
int x;
int y;
int z;
int w;
int u;
};
struct ivec6 {
int x;
int y;
int z;
int w;
int u;
int v;
};
`;
/* eslint-disable */
/**
* @file batchnorm config file
* @author yangmingming
*/
export default {
dep: [
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'origin'
}
},
{
func: 'getPixelsFromTexturePos',
conf: {
TEXTURE_NAME: 'texture_scale'
}
}
],
conf: [
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'TOTAL_SHAPE_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'EPSILON',
'WIDTH_TEXTURE_SCALE',
'HEIGHT_TEXTURE_SCALE',
'MULTI_VALUE',
'BIAS_VALUE',
'ACTIVE_FUNCTION'
],
input: [
{
tensor: 'scale',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
},
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file batchnorm main function
* @author wangqun
*/
export default `
// main entry
void main(void) {
// output position
ivec4 oPos = getOutputTensorPos();
float o = getValueFromTensorPos_origin(oPos.r, oPos.g, oPos.b, oPos.a);
// normalize
vec4 scale = getPixelsFromTexturePos_texture_scale(vec2((float(int(oPos.g)) + 0.5) / float(width_texture_scale), 0.0));
float x = (o - scale[3]) / sqrt(scale[2] + epsilon);
float res = scale[0] * x + scale[1];
setOutput(res);
}
`;
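// Note on the scale texture layout: mergeTensor() in the op-data module
// interleaves [scale, bias, variance, mean] per channel, so in the shader above
// scale[0] = scale, scale[1] = bias, scale[2] = variance and scale[3] = mean.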
/* eslint-disable */
/**
* @file batchnorm parameters file
* @author yangmingming
*/
export default `
// input data
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
const int total_shape_origin = TOTAL_SHAPE_ORIGIN;
// computation constants
const float epsilon = float(EPSILON);
const int width_texture_scale = WIDTH_TEXTURE_SCALE;
const int height_texture_scale = HEIGHT_TEXTURE_SCALE;
// input textures
uniform sampler2D texture_origin;
uniform sampler2D texture_scale;
`;
/* eslint-disable */
/**
* @file conv2d config file
* @author yangmingming
*/
export default {
dep: [
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'origin'
}
},
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'filter'
}
}
],
conf: [
'LENGTH_SHAPE_FILTER',
'WIDTH_SHAPE_FILTER',
'HEIGHT_SHAPE_FILTER',
'WIDTH_TEXTURE_FILTER',
'HEIGHT_TEXTURE_FILTER',
'CHANNEL_FILTER',
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'STRIDE_HORIZONTAL',
'STRIDE_VERTICAL',
'PAD_LEFT',
'PAD_TOP',
'DILATION_HORIZONTAL',
'DILATION_VERTICAL',
'GROUPS',
'MULTI_VALUE',
'BIAS_VALUE',
'ACTIVE_FUNCTION'
],
input: [
// {
// tensor: 'filter',
// variable: 'numbers_shape',
// setter: 'uniform1iv',
// type: 'uniform'
// },
{
tensor: 'filter',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
},
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
// {
// tensor: 'origin',
// variable: 'numbers_shape',
// setter: 'uniform1iv',
// type: 'uniform'
// },
// {
// tensor: 'out',
// variable: 'numbers_shape',
// setter: 'uniform1iv',
// type: 'uniform'
// }
]
};
/* eslint-disable */
/**
* @file main function
* @author yangmingming
*/
export default `
// main entry
void main(void) {
ivec4 oPos = getOutputTensorPosLIMIT_OUT();
int x = oPos.a;
int c = oPos.g;
int y = oPos.b;
int b = oPos.r;
float res = 0.0;
// get the output coordinates
int oTensorChannel = (c / (channel_out / groups)) * channel_filter;
int oy = y * stride_v - padTop;
for (int fy = 0; fy < height_shape_filter; fy++) {
if (oy >= height_shape_origin) {
break;
}
if (oy < 0) {
oy += dilation_v;
continue;
}
int ox = x * stride_h - padLeft;
for (int fx = 0; fx < width_shape_filter; fx++) {
if (ox >= width_shape_origin) {
break;
}
if (ox < 0) {
ox += dilation_h;
continue;
}
// accumulate over channels
for (int j = 0; j < channel_filter; j++) {
float f = getValueFromTensorPosLIMIT_FILTER_filter(c, j, fy, fx);
float o = getValueFromTensorPosLIMIT_ORIGIN_origin(b, oTensorChannel + j, oy, ox);
res += f * o;
}
ox += dilation_h;
}
oy += dilation_v;
}
setOutput(res);
}
`;
/* eslint-disable */
/**
* @file parameters file
* @author yangmingming
*/
export default `
// conv2d input data
// constants
// convolution kernel
const int length_shape_filter = LENGTH_SHAPE_FILTER;
const int width_shape_filter = WIDTH_SHAPE_FILTER;
const int height_shape_filter = HEIGHT_SHAPE_FILTER;
const int width_texture_filter = WIDTH_TEXTURE_FILTER;
const int height_texture_filter = HEIGHT_TEXTURE_FILTER;
const int channel_filter = CHANNEL_FILTER;
// input data
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
// computation-related
// strides
const int stride_h = STRIDES_X;
const int stride_v = STRIDES_Y;
// padding amounts
const int padLeft = PADDINGS_X;
const int padTop = PADDINGS_Y;
// dilation factors
const int dilation_h = DILATIONS_X;
const int dilation_v = DILATIONS_Y;
// groups
const int groups = GROUPS;
// uniform variables
// convolution kernel
uniform sampler2D texture_filter;
// input data
uniform sampler2D texture_origin;
`;
/* eslint-disable */
/**
* @file conv2d config file
* @author yangmingming
*/
export default {
dep: [
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'origin'
}
},
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'filter'
}
}
],
conf: [
'LENGTH_SHAPE_FILTER',
'WIDTH_SHAPE_FILTER',
'HEIGHT_SHAPE_FILTER',
'WIDTH_TEXTURE_FILTER',
'HEIGHT_TEXTURE_FILTER',
'CHANNEL_FILTER',
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'STRIDE_HORIZONTAL',
'STRIDE_VERTICAL',
'PAD_LEFT',
'PAD_TOP',
'DILATION_HORIZONTAL',
'DILATION_VERTICAL',
'MULTI_VALUE',
'BIAS_VALUE',
'ACTIVE_FUNCTION'
],
input: [
{
tensor: 'filter',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
},
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file main function
* @author yangmingming
*/
export default `
// main entry
void main(void) {
ivec4 oPos = getOutputTensorPosLIMIT_OUT();
int x = oPos.a;
int c = oPos.g;
int y = oPos.b;
int b = oPos.r;
float res = 0.0;
int top = y * stride_v - padTop;
int left = x * stride_h - padLeft;
for (int fy = 0; fy < height_shape_filter; fy++) {
int oy = top + fy * dilation_v;
if (oy >= height_shape_origin) {
break;
}
if (oy < 0) {
continue;
}
for (int fx = 0; fx < width_shape_filter; fx++) {
int ox = left + fx * dilation_h;
if (ox >= width_shape_origin) {
break;
}
if (ox < 0) {
continue;
}
// b defaults to 0
float f = getValueFromTensorPosLIMIT_FILTER_filter(c, 0, fy, fx);
float o = getValueFromTensorPosLIMIT_ORIGIN_origin(b, c, oy, ox);
res += f * o;
}
}
setOutput(res);
}
`;
/* eslint-disable */
/**
* @file parameters file
* @author yangmingming
*/
export default `
// conv2d input data
// constants
// convolution kernel
const int length_shape_filter = LENGTH_SHAPE_FILTER;
const int width_shape_filter = WIDTH_SHAPE_FILTER;
const int height_shape_filter = HEIGHT_SHAPE_FILTER;
const int width_texture_filter = WIDTH_TEXTURE_FILTER;
const int height_texture_filter = HEIGHT_TEXTURE_FILTER;
const int channel_filter = CHANNEL_FILTER;
// input data
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
// computation-related
// strides
const int stride_h = STRIDES_X;
const int stride_v = STRIDES_Y;
// padding amounts
const int padLeft = PADDINGS_X;
const int padTop = PADDINGS_Y;
// dilation factors
const int dilation_h = DILATIONS_X;
const int dilation_v = DILATIONS_Y;
// uniform variables
// convolution kernel
uniform sampler2D texture_filter;
// input data
uniform sampler2D texture_origin;
`;
/* eslint-disable */
/**
* @file conv2d-elementwise_add config file
* @author yangmingming
*/
export default {
dep: [
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'origin'
}
},
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'filter'
}
}
],
conf: [
'LENGTH_SHAPE_FILTER',
'WIDTH_SHAPE_FILTER',
'HEIGHT_SHAPE_FILTER',
'WIDTH_TEXTURE_FILTER',
'HEIGHT_TEXTURE_FILTER',
'CHANNEL_FILTER',
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'WIDTH_SHAPE_COUNTER',
'STRIDE_HORIZONTAL',
'STRIDE_VERTICAL',
'PAD_LEFT',
'PAD_TOP',
'DILATION_HORIZONTAL',
'DILATION_VERTICAL',
'GROUPS',
'AXIS',
'MULTI_VALUE',
'BIAS_VALUE',
'ACTIVE_FUNCTION'
],
input: [
{
tensor: 'filter',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
},
{
tensor: 'counter',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
},
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file main function
* @author yangmingming
*/
export default `
// main entry
void main(void) {
ivec4 oPos = getOutputTensorPosLIMIT_OUT();
int x = oPos.a;
int c = oPos.g;
int y = oPos.b;
int b = oPos.r;
int addAxis = oPos[axis];
float res = getValueFromCounter(addAxis);
// get the output coordinates
int oTensorChannel = (c / (channel_out / groups)) * channel_filter;
int oy = y * stride_v - padTop;
for (int fy = 0; fy < height_shape_filter; fy++) {
if (oy >= height_shape_origin) {
break;
}
if (oy < 0) {
oy += dilation_v;
continue;
}
int ox = x * stride_h - padLeft;
for (int fx = 0; fx < width_shape_filter; fx++) {
if (ox >= width_shape_origin) {
break;
}
if (ox < 0) {
ox += dilation_h;
continue;
}
// accumulate over channels
for (int j = 0; j < channel_filter; j++) {
float f = getValueFromTensorPosLIMIT_FILTER_filter(c, j, fy, fx);
float o = getValueFromTensorPosLIMIT_ORIGIN_origin(b, oTensorChannel + j, oy, ox);
res += f * o;
}
ox += dilation_h;
}
oy += dilation_v;
}
setOutput(ACTIVE_FUNCTION(res, multi_value, bias_value));
// outColor.r = float(b);
// outColor.g = float(c);
// outColor.b = float(y);
// outColor.a = float(x);
}
`;
/* eslint-disable */
/**
* @file parameters file
* @author yangmingming
*/
export default `
// convolution kernel
const int length_shape_filter = LENGTH_SHAPE_FILTER;
const int width_shape_filter = WIDTH_SHAPE_FILTER;
const int height_shape_filter = HEIGHT_SHAPE_FILTER;
const int width_texture_filter = WIDTH_TEXTURE_FILTER;
const int height_texture_filter = HEIGHT_TEXTURE_FILTER;
const int channel_filter = CHANNEL_FILTER;
// input data
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
// computation-related
// strides
const int stride_h = STRIDES_X;
const int stride_v = STRIDES_Y;
// padding amounts
const int padLeft = PADDINGS_X;
const int padTop = PADDINGS_Y;
// dilation factors
const int dilation_h = DILATIONS_X;
const int dilation_v = DILATIONS_Y;
// groups
const int groups = GROUPS;
// addition
const int axis = AXIS;
// uniform variables
// convolution kernel
uniform sampler2D texture_filter;
// input data
uniform sampler2D texture_origin;
// addition
uniform sampler2D texture_counter;
// helper used by the addition
float getValueFromCounter(int index) {
float xPos = float(index) / float(WIDTH_SHAPE_COUNTER);
vec4 pixels = TEXTURE2D(texture_counter, vec2(xPos, 0.5));
return pixels.r;
}
`;
/* eslint-disable */
/**
* @file conv2d-elementwise_add config file
* @author yangmingming
*/
export default {
dep: [
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'origin'
}
},
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'filter'
}
}
],
conf: [
'LENGTH_SHAPE_FILTER',
'WIDTH_SHAPE_FILTER',
'HEIGHT_SHAPE_FILTER',
'WIDTH_TEXTURE_FILTER',
'HEIGHT_TEXTURE_FILTER',
'CHANNEL_FILTER',
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'TOTAL_SHAPE_COUNTER',
'PAD_LEFT',
'PAD_TOP',
'AXIS',
'MULTI_VALUE',
'BIAS_VALUE',
'ACTIVE_FUNCTION'
],
input: [
{
tensor: 'filter',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
},
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
},
{
tensor: 'counter',
variable: 'data',
setter: 'uniform1fv',
type: 'uniform'
}
]
};
/* eslint-disable */
/**
* @file main function
* @author yangmingming
*/
export default `
// main entry
void main(void) {
ivec4 oPos = getOutputPackedTensorPos();
int x = oPos.a;
int c = oPos.g;
int y = oPos.b;
int b = oPos.r;
// b = 0;
// c = 1;
// y = 0;
// x = 0;
int addAxis = oPos[axis];
float res = getValueFromCounter(addAxis);
// output result
vec4 v4 = vec4(res);
float I[16];
float B[16];
float T[16];
float f[16];
for (int cl = 0; cl < channel_filter; cl++) {
// get the output coordinates
int oy = 2*y - padTop;
// compute the 4 x 4 input matrix and the filter
for (int fy = 0; fy < 4; fy++) {
int ox = 2*x - padLeft;
int index = fy * 4;
for (int fx = 0; fx < 4; fx++) {
if (oy < 0 || oy >= height_shape_origin || ox >= width_shape_origin || ox < 0) {
I[index + fx] = 0.0;
} else {
I[index + fx] = getValueFromTensorPos_origin(b, cl, oy, ox);
}
f[index + fx] = getValueFromTensorPos_filter(c, cl, fy, fx);
ox += 1;
}
oy += 1;
}
// input transform
float tmp1 = I[2] - I[10];
float tmp2 = I[9] - I[1];
B[0] = I[0] - I[8] - tmp1;
B[1] = tmp1 - tmp2;
B[2] = tmp1 + tmp2;
B[3] = I[3] - I[11] + tmp2;
tmp1 = I[6] + I[10];
tmp2 = I[5] + I[9];
B[4] = I[4] + I[8] - tmp1;
B[5] = tmp1 + tmp2;
B[6] = tmp1 - tmp2;
B[7] = I[7] + I[11] - tmp2;
tmp1 = I[10] - I[6];
tmp2 = I[5] - I[9];
B[8] = I[8] - I[4] - tmp1;
B[9] = tmp1 - tmp2;
B[10] = tmp1 + tmp2;
B[11] = tmp2 - I[7] + I[11];
tmp1 = I[14] - I[6];
tmp2 = I[5] - I[13];
B[12] = I[12] - I[4] - tmp1;
B[13] = tmp1 - tmp2;
B[14] = tmp1 + tmp2;
B[15] = tmp2 - I[7] + I[15];
// elementwise multiply
for (int i = 0; i < 16; i++) {
T[i] = B[i] * f[i];
}
// final output
tmp1 = T[1] + T[5] + T[9];
tmp2 = T[2] + T[6] + T[10];
v4[0] += T[0] + T[4] + T[8] + tmp1 + tmp2;
v4[1] += T[3] + T[7] + T[11] + tmp1 - tmp2;
tmp1 = T[5] - T[9] + T[13];
tmp2 = T[6] - T[10] + T[14];
v4[2] += T[4] - T[8] + T[12] + tmp1 + tmp2;
v4[3] += T[7] - T[11] + T[15] + tmp1 - tmp2;
}
outColor.r = ACTIVE_FUNCTION(v4[0], multi_value, bias_value);
outColor.g = ACTIVE_FUNCTION(v4[1], multi_value, bias_value);
outColor.b = ACTIVE_FUNCTION(v4[2], multi_value, bias_value);
outColor.a = ACTIVE_FUNCTION(v4[3], multi_value, bias_value);
// outColor = v4;
// outColor.r = I[0];
// outColor.g = I[1];
// outColor.b = I[2];
// outColor.a = I[3];
// outColor.r = float(b);
// outColor.g = float(c);
// outColor.b = float(y);
// outColor.a = float(x);
}
`;
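// Reference sketch (plain JavaScript, illustrative only) of the 4x4 Winograd
// F(2x2, 3x3) input transform computed inline above: B is derived from the
// input tile I with exactly the same add/subtract pattern as the shader.
function winogradInputTransformSketch(I) { // I: length-16, row-major 4x4 tile
    const B = new Array(16);
    let tmp1 = I[2] - I[10];
    let tmp2 = I[9] - I[1];
    B[0] = I[0] - I[8] - tmp1;
    B[1] = tmp1 - tmp2;
    B[2] = tmp1 + tmp2;
    B[3] = I[3] - I[11] + tmp2;
    tmp1 = I[6] + I[10];
    tmp2 = I[5] + I[9];
    B[4] = I[4] + I[8] - tmp1;
    B[5] = tmp1 + tmp2;
    B[6] = tmp1 - tmp2;
    B[7] = I[7] + I[11] - tmp2;
    tmp1 = I[10] - I[6];
    tmp2 = I[5] - I[9];
    B[8] = I[8] - I[4] - tmp1;
    B[9] = tmp1 - tmp2;
    B[10] = tmp1 + tmp2;
    B[11] = tmp2 - I[7] + I[11];
    tmp1 = I[14] - I[6];
    tmp2 = I[5] - I[13];
    B[12] = I[12] - I[4] - tmp1;
    B[13] = tmp1 - tmp2;
    B[14] = tmp1 + tmp2;
    B[15] = tmp2 - I[7] + I[15];
    return B;
}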
/* eslint-disable */
/**
* @file parameters file
* @author yangmingming
*/
export default `
// convolution kernel
const int length_shape_filter = LENGTH_SHAPE_FILTER;
const int width_shape_filter = WIDTH_SHAPE_FILTER;
const int height_shape_filter = HEIGHT_SHAPE_FILTER;
const int width_texture_filter = WIDTH_TEXTURE_FILTER;
const int height_texture_filter = HEIGHT_TEXTURE_FILTER;
const int channel_filter = CHANNEL_FILTER;
// input data
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
// computation-related
// padding amounts
const int padLeft = PADDINGS_X;
const int padTop = PADDINGS_Y;
// addition
const int axis = AXIS;
uniform float data_counter[TOTAL_SHAPE_COUNTER];
// uniform variables
// convolution kernel
uniform sampler2D texture_filter;
// input data
uniform sampler2D texture_origin;
// helper used by the addition
float getValueFromCounter(int index) {
for (int i = 0; i < TOTAL_SHAPE_COUNTER; i++) {
if (i == index) {
return data_counter[i];
}
}
return 0.0;
}
`;
/* eslint-disable */
/**
* @file dynamic op config file
* @author yangmingming
*/
export default {
dep: [
{
func: 'getPixelsFromTexturePos',
conf: {
TEXTURE_NAME: 'texture_origin'
}
}
],
conf: [
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'MULTI_VALUE',
'BIAS_VALUE',
'ACTIVE_FUNCTION'
],
input: [
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file main function
* @author yangmingming
*/
export default `
// main entry
void main(void) {
// output data
float o = getPixelsFromTexturePos_texture_origin(vCoord).r;
float res = ACTIVE_FUNCTION(o, multi_value, bias_value);
setOutput(res);
}
`;
/* eslint-disable */
/**
* @file parameters file
* @author yangmingming
*/
export default `
// input data
uniform sampler2D texture_origin;
`;
/* eslint-disable */
/**
* @file elementwise add config file
* @author yangmingming
*/
export default {
dep: [
{
func: 'getPixelsFromTexturePos',
conf: {
TEXTURE_NAME: 'texture_origin'
}
},
{
func: 'getPixelsFromTexturePos',
conf: {
TEXTURE_NAME: 'texture_counter'
}
}
],
conf: [
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'TOTAL_SHAPE_COUNTER',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'AXIS',
'MULTI_VALUE',
'BIAS_VALUE',
'ACTIVE_FUNCTION'
],
input: [
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
},
{
tensor: 'counter',
variable: 'data',
setter: 'uniform1fv',
type: 'uniform'
}
]
};
/* eslint-disable */
/**
* @file elementwise add main function
* @author yangmingming
*/
export default `
// main entry
void main(void) {
// output data
ivec4 oPos = getOutputTensorPosLIMIT_OUT();
int index = oPos[axis];
float o = getPixelsFromTexturePos_texture_origin(vCoord).r;
float c = getValueFromCounter(index);
float res = ACTIVE_FUNCTION(o + c, multi_value, bias_value);
setOutput(res);
}
`;
/* eslint-disable */
/**
* @file elementwise add parameters
* @author yangmingming
*/
export default `
// input data
const int axis = AXIS;
// const int total_shape_counter = TOTAL_SHAPE_COUNTER;
uniform float data_counter[TOTAL_SHAPE_COUNTER];
uniform sampler2D texture_origin;
float getValueFromCounter(int index) {
for (int i = 0; i < TOTAL_SHAPE_COUNTER; i++) {
if (i == index) {
return data_counter[i];
}
}
return 0.0;
}
`;
/* eslint-disable */
/**
* @file mul config file
* @author yangmingming zhangmiao06
*/
export default {
dep: [
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'counter'
}
},
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'origin'
}
}
],
conf: [
'LENGTH_SHAPE_COUNTER',
'WIDTH_SHAPE_COUNTER',
'HEIGHT_SHAPE_COUNTER',
'WIDTH_TEXTURE_COUNTER',
'HEIGHT_TEXTURE_COUNTER',
'CHANNEL_COUNTER',
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT'
],
input: [
{
tensor: 'counter',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
},
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file mul main function
*/
export default `
// main entry
void main(void) {
float res = 0.0;
// get the output coordinates
ivec4 out_pos = getOutputTensorPos();
for (int j = 0; j < width_shape_origin; j++) {
float c = getValueFromTensorPos_counter(out_pos[0], out_pos[1], j, out_pos[3]);
float o = getValueFromTensorPos_origin(out_pos[0], out_pos[1], out_pos[2], j);
res += c * o;
}
setOutput(res);
}
`;
/* eslint-disable */
/**
* @file mul parameters file
*/
export default `
// mul input data
// constants
// input data
const int length_shape_counter = LENGTH_SHAPE_COUNTER;
const int width_shape_counter = WIDTH_SHAPE_COUNTER;
const int height_shape_counter = HEIGHT_SHAPE_COUNTER;
const int width_texture_counter = WIDTH_TEXTURE_COUNTER;
const int height_texture_counter = HEIGHT_TEXTURE_COUNTER;
const int channel_counter = CHANNEL_COUNTER;
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
// uniform variables
// input data
uniform sampler2D texture_counter;
uniform sampler2D texture_origin;
`;
/* eslint-disable */
/**
* @file pool2d config file
* @author yangmingming zhangmiao06
*/
export default {
dep: [
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'origin'
}
}
],
conf: [
'KSIZE_X',
'KSIZE_Y',
'TYPE_POOL',
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'STRIDES_X',
'STRIDES_Y',
'PADDING_X',
'PADDING_Y'
],
input: [
// texture type; adding from: 'prev' reads the previous op's output
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file pool2d main function
*/
export default `
// main entry
void main(void) {
float res = (-1.0 / exp(-20.0));
// get the output coordinates
ivec4 out_pos = getOutputTensorPosLIMIT_OUT();
// step sizes in the X and Y directions
int count_pool = 0;
int oy_base = out_pos[2] * stride_v - padTop;
int ox_base = out_pos[3] * stride_h - padLeft;
for (int fy = 0; fy < height_shape_pool; fy++) {
int oy = oy_base + fy;
if (oy >= height_shape_origin) {
break;
}
if (oy < 0) {
continue;
}
for (int fx = 0; fx < width_shape_pool; fx++) {
int ox = ox_base + fx;
if (ox >= width_shape_origin) {
break;
}
if (ox < 0) {
continue;
}
// origin data
float curr = getValueFromTensorPosLIMIT_ORIGIN_origin(out_pos[0], out_pos[1], oy, ox);
if (type_pool == 1) {
if (curr > res) {
res = curr;
}
} else {
res += curr;
// in average-pooling mode, padded values are ignored (exclusive defaults to true)
count_pool++;
}
}
}
if (type_pool != 1) {
res = res / float(count_pool);
}
setOutput(res);
}
`;
/* eslint-disable */
/**
* @file pool2d parameters file
*/
export default `
// constants
// pooling window size
const int width_shape_pool = KSIZE_X;
const int height_shape_pool = KSIZE_Y;
const int type_pool = TYPE_POOL;
// input data
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
// computation-related
// strides
const int stride_h = STRIDES_X;
const int stride_v = STRIDES_Y;
// padding amounts
const int padLeft = PADDINGS_X;
const int padTop = PADDINGS_Y;
// uniform variables
uniform sampler2D texture_origin;
`;
/* eslint-disable */
/**
* @file pool2d_avg config file
* @author yangmingming zhangmiao06
*/
export default {
dep: [
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'origin'
}
}
],
conf: [
'KSIZE_X',
'KSIZE_Y',
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'STRIDES_X',
'STRIDES_Y',
'PADDING_X',
'PADDING_Y'
],
input: [
// texture type; adding from: 'prev' reads the previous op's output
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file pool2d_avg main function
* @author yangmingming zhangmiao06
*/
export default `
// main entry
void main(void) {
float res = 0.0;
// get the output coordinates
ivec4 out_pos = getOutputTensorPos();
// step sizes in the X and Y directions
int oy_base = out_pos[2] * stride_v - padTop;
int ox_base = out_pos[3] * stride_h - padLeft;
for (int fy = 0; fy < height_shape_pool; fy++) {
int oy = oy_base + fy;
if (oy >= height_shape_origin) {
break;
}
if (oy < 0) {
continue;
}
for (int fx = 0; fx < width_shape_pool; fx++) {
int ox = ox_base + fx;
if (ox >= width_shape_origin) {
break;
}
if (ox < 0) {
continue;
}
// origin data
float curr = getValueFromTensorPos_origin(out_pos[0], out_pos[1], oy, ox);
res += curr;
// in average-pooling mode, padded values are ignored (exclusive defaults to true)
}
}
res = res / float(height_shape_pool * width_shape_pool);
setOutput(res);
}
`;
/* eslint-disable */
/**
* @file pool2d_avg parameters file
* @author yangmingming zhangmiao06
*/
export default `
// constants
// pooling window size
const int width_shape_pool = KSIZE_X;
const int height_shape_pool = KSIZE_Y;
// input data
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
// computation-related
// strides
const int stride_h = STRIDES_X;
const int stride_v = STRIDES_Y;
// padding amounts
const int padLeft = PADDINGS_X;
const int padTop = PADDINGS_Y;
// uniform variables
uniform sampler2D texture_origin;
`;
/* eslint-disable */
/**
* @file pool2d config file
* @author yangmingming zhangmiao06
*/
export default {
dep: [
{
func: 'getValueFromTensorPos',
conf: {
TENSOR_NAME: 'origin'
}
}
],
conf: [
'KSIZE_X',
'KSIZE_Y',
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'STRIDES_X',
'STRIDES_Y',
'PADDING_X',
'PADDING_Y'
],
input: [
// texture type; adding from: 'prev' reads the previous op's output
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file pool2d main function
*/
export default `
// main entry
void main(void) {
float res = (-1.0 / exp(-20.0));
// get the output coordinates
ivec4 out_pos = getOutputTensorPosLIMIT_OUT();
int b = out_pos[0];
int c = out_pos[1];
int y = out_pos[2];
int x = out_pos[3];
// step sizes in the X and Y directions
int oy_base = out_pos[2] * stride_v - padTop;
int ox_base = out_pos[3] * stride_h - padLeft;
for (int fy = 0; fy < height_shape_pool; fy++) {
int oy = oy_base + fy;
if (oy >= height_shape_origin) {
break;
}
if (oy < 0) {
continue;
}
for (int fx = 0; fx < width_shape_pool; fx++) {
int ox = ox_base + fx;
if (ox >= width_shape_origin) {
break;
}
if (ox < 0) {
continue;
}
// origin data
float curr = getValueFromTensorPosLIMIT_ORIGIN_origin(out_pos[0], out_pos[1], oy, ox);
res = max(res, curr);
}
}
setOutput(res);
// outColor.r = float(b);
// outColor.g = float(c);
// outColor.b = float(y);
// outColor.a = float(x);
}
`;
/* eslint-disable */
/**
* @file pool2d parameters file
*/
export default `
// constants
// pooling window size
const int width_shape_pool = KSIZE_X;
const int height_shape_pool = KSIZE_Y;
// input data
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
// computation-related
// strides
const int stride_h = STRIDES_X;
const int stride_v = STRIDES_Y;
// padding amounts
const int padLeft = PADDINGS_X;
const int padTop = PADDINGS_Y;
// uniform variables
uniform sampler2D texture_origin;
`;
/* eslint-disable */
/**
* @file pool2d config file
* @author yangmingming zhangmiao06
*/
export default {
dep: [
{
func: 'getValueFromTensorPosPacked',
conf: {
TENSOR_NAME: 'origin'
}
}
],
conf: [
'KSIZE_X',
'KSIZE_Y',
'TYPE_POOL',
'WIDTH_SHAPE_ORIGIN',
'HEIGHT_SHAPE_ORIGIN',
'LENGTH_SHAPE_ORIGIN',
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'CHANNEL_ORIGIN',
'OFFSET_X_ORIGIN',
'OFFSET_Y_ORIGIN',
'WIDTH_SHAPE_OUT',
'HEIGHT_SHAPE_OUT',
'WIDTH_TEXTURE_OUT',
'HEIGHT_TEXTURE_OUT',
'CHANNEL_OUT',
'OFFSET_Y_OUT',
'STRIDES_X',
'STRIDES_Y',
'PADDING_X',
'PADDING_Y'
],
input: [
// texture type; adding from: 'prev' reads the previous op's output
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file pool2d main function
*/
export default `
// main entry
void main(void) {
float res = (-1.0 / exp(-20.0));
// get the output coordinates
ivec4 out_pos = getOutputTensorPos();
// int b = out_pos[0];
// int c = out_pos[1];
// int y = out_pos[2];
// int x = out_pos[3];
// step sizes in the X and Y directions
int count_pool = 0;
int oy_base = out_pos[2] * stride_v - padTop;
int ox_base = out_pos[3] * stride_h - padLeft;
// int offset = 0;
// vec4 v4 = texture(texture_origin, vec2((float(0) + 0.5) / float(width_texture_origin), (float(1 * height_shape_origin / 2 + 0) + 0.5) / float(height_texture_origin)));
for (int fy = 0; fy < height_shape_pool; fy++) {
int oy = oy_base + fy;
if (oy >= height_shape_origin) {
break;
}
if (oy < 0) {
continue;
}
for (int fx = 0; fx < width_shape_pool; fx++) {
int ox = ox_base + fx;
if (ox >= width_shape_origin) {
break;
}
if (ox < 0) {
continue;
}
// origin data
float curr = getValueFromTensorPosPacked_origin(out_pos[0], out_pos[1], oy, ox);
// y = oy;
// x = ox;
// v4[offset++] = curr;
if (type_pool == 1) {
if (curr > res) {
res = curr;
}
} else {
res += curr;
// in average-pooling mode, padded values are ignored (exclusive defaults to true)
count_pool++;
}
}
}
if (type_pool != 1) {
res = res / float(count_pool);
}
setOutput(res);
// outColor = v4;
// outColor.r = float(b);
// outColor.g = float(c);
// outColor.b = float(y);
// outColor.a = float(x);
}
`;
/* eslint-disable */
/**
* @file pool2d parameters file
*/
export default `
// constants
// pooling window size
const int width_shape_pool = KSIZE_X;
const int height_shape_pool = KSIZE_Y;
const int type_pool = TYPE_POOL;
// input data
const int width_shape_origin = WIDTH_SHAPE_ORIGIN;
const int height_shape_origin = HEIGHT_SHAPE_ORIGIN;
const int length_shape_origin = LENGTH_SHAPE_ORIGIN;
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int channel_origin = CHANNEL_ORIGIN;
const int offset_x_origin = OFFSET_X_ORIGIN;
const int offset_y_origin = OFFSET_Y_ORIGIN;
// computation-related
// strides
const int stride_h = STRIDES_X;
const int stride_v = STRIDES_Y;
// padding amounts
const int padLeft = PADDINGS_X;
const int padTop = PADDINGS_Y;
// uniform variables
uniform sampler2D texture_origin;
`;
/* eslint-disable */
/**
* @file softmax config file
* @author yangmingming
*/
export default {
dep: [
{
func: 'getPixelsFromTexturePos',
conf: {
TEXTURE_NAME: 'texture_origin'
}
}
],
conf: [
'WIDTH_TEXTURE_ORIGIN',
'HEIGHT_TEXTURE_ORIGIN',
'TOTAL_SHAPE_ORIGIN',
'OFFSET_Y_OUT'
],
input: [
{
tensor: 'origin',
variable: 'texture',
setter: 'initTexture',
type: 'texture'
}
]
};
/* eslint-disable */
/**
* @file softmax main function
* @author yangmingming
*/
export default `
// main entry
void main(void) {
float res = 0.0;
vec4 v4 = getPixelsFromTexturePos_texture_origin(vCoord);
vec2 onePixel = vec2(1.0 / float(width_texture_origin), 1.0 / float(height_texture_origin));
float total = 0.0;
float maxValue = getPixelsFromTexturePos_texture_origin(onePixel).r;
int number = 0;
vec4 pixels;
vec4 result;
// find the maximum
for (int i = 0; i < height_texture_origin; i++) {
for (int j = 0; j < width_texture_origin; j++) {
pixels = getPixelsFromTexturePos_texture_origin(onePixel * vec2(float(j), float(i)));
number = i * width_texture_origin + j;
if ((number * 4 + 1) < total_shape_origin) {
maxValue = max(pixels.r, maxValue);
}
if ((number * 4 + 2) < total_shape_origin) {
maxValue = max(pixels.g, maxValue);
}
if ((number * 4 + 3) < total_shape_origin) {
maxValue = max(pixels.b, maxValue);
}
if ((number * 4 + 4) < total_shape_origin) {
maxValue = max(pixels.a, maxValue);
}
}
}
// sum the exponentials
for (int i = 0; i < height_texture_origin; i++) {
for (int j = 0; j < width_texture_origin; j++) {
pixels = getPixelsFromTexturePos_texture_origin(onePixel * vec2(float(j), float(i)));
number = i * width_texture_origin + j;
if ((number * 4 + 1) < total_shape_origin) {
total += exp(pixels.r - maxValue);
}
if ((number * 4 + 2) < total_shape_origin) {
total += exp(pixels.g - maxValue);
}
if ((number * 4 + 3) < total_shape_origin) {
total += exp(pixels.b - maxValue);
}
if ((number * 4 + 4) < total_shape_origin) {
total += exp(pixels.a - maxValue);
}
}
}
outColor = exp(v4 - vec4(maxValue, maxValue, maxValue, maxValue)) / vec4(total, total, total, total);
// res = result.a;
// setOutput(res);
}
`;
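// Reference sketch (plain JavaScript, illustrative only) of the numerically
// stable softmax the shader above implements: subtract the global max before
// exponentiating, then normalize by the sum.
function softmaxSketch(values) {
    const maxValue = Math.max(...values);
    const exps = values.map(v => Math.exp(v - maxValue));
    const total = exps.reduce((sum, v) => sum + v, 0);
    return exps.map(v => v / total);
}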
/* eslint-disable */
/**
* @file softmax parameters file
* @author yangmingming
*/
export default `
// input data
const int width_texture_origin = WIDTH_TEXTURE_ORIGIN;
const int height_texture_origin = HEIGHT_TEXTURE_ORIGIN;
const int total_shape_origin = TOTAL_SHAPE_ORIGIN;
// uniform variables
// input data
uniform sampler2D texture_origin;
`;
/* eslint-disable */
/**
* @file vertex shader
* @author wangqun
* @desc vertex coordinate conversion, for WebGL 1
*/
export default `
attribute vec4 position;
varying vec2 vCoord;
void main() {
vCoord.x = (position.x + 1.0) / 2.0;
vCoord.y = (position.y + 1.0) / 2.0;
gl_Position = position;
}
`;
/* eslint-disable */
/**
* @file vertex shader, WebGL 2.0
* @author wangqun
* @desc vertex coordinate conversion, for WebGL 2
*/
export default `#version 300 es
in vec4 position;
out vec2 vCoord;
void main() {
vCoord.x = (position.x + 1.0) / 2.0;
vCoord.y = (position.y + 1.0) / 2.0;
gl_Position = position;
}
`;
/**
* @file query the environment's max uniform limits
* @author yangmingming
*/
// uniform variable types
const enums = {
0x8B50: 'FLOAT_VEC2',
0x8B51: 'FLOAT_VEC3',
0x8B52: 'FLOAT_VEC4',
0x8B53: 'INT_VEC2',
0x8B54: 'INT_VEC3',
0x8B55: 'INT_VEC4',
0x8B56: 'BOOL',
0x8B57: 'BOOL_VEC2',
0x8B58: 'BOOL_VEC3',
0x8B59: 'BOOL_VEC4',
0x8B5A: 'FLOAT_MAT2',
0x8B5B: 'FLOAT_MAT3',
0x8B5C: 'FLOAT_MAT4',
0x8B5E: 'SAMPLER_2D',
0x8B60: 'SAMPLER_CUBE',
0x1400: 'BYTE',
0x1401: 'UNSIGNED_BYTE',
0x1402: 'SHORT',
0x1403: 'UNSIGNED_SHORT',
0x1404: 'INT',
0x1405: 'UNSIGNED_INT',
0x1406: 'FLOAT'
};
export default function(gl, program) {
// max fragment-shader uniform vectors: 256 on Android, 1024 in desktop Chrome
const result = {
attributes: [],
uniforms: [],
attributeCount: 0,
uniformCount: 0,
maxVertexShader: gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS),
maxFragmentShader: gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS)
};
const activeUniforms = gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS);
const activeAttributes = gl.getProgramParameter(program, gl.ACTIVE_ATTRIBUTES);
// Loop through active uniforms
for (let i = 0; i < activeUniforms; i++) {
const uniform = gl.getActiveUniform(program, i);
uniform.typeName = enums[uniform.type];
result.uniforms.push(uniform);
result.uniformCount += uniform.size;
}
// Loop through active attributes
for (let i = 0; i < activeAttributes; i++) {
const attribute = gl.getActiveAttrib(program, i);
attribute.typeName = enums[attribute.type];
result.attributes.push(attribute);
result.attributeCount += attribute.size;
}
return result;
};
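// Usage sketch (hedged: gl must be a live WebGL context and program a linked
// program; getMaxUniforms is the default export above).
function logProgramLimitsSketch(gl, program, getMaxUniforms) {
    const info = getMaxUniforms(gl, program);
    console.log('max vertex uniform vectors:', info.maxVertexShader);
    console.log('max fragment uniform vectors:', info.maxFragmentShader);
    console.log('active uniform components used:', info.uniformCount);
    console.log('active attribute slots used:', info.attributeCount);
}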
export default {
'608': {
modelPath: 'faceModel',
feedShape: {
fw: 608,
fh: 608
},
outputShapes: {
from: [19, 19, 25, 1],
to: [19, 19, 5, 5]
}
},
'320': {
modelPath: 'facemodel320',
feedShape: {
fw: 320,
fh: 320
},
outputShapes: {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
}
},
'320fused': {
modelPath: 'facemodelfused',
feedShape: {
fw: 320,
fh: 320
},
outputShapes: {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
}
},
'separate': {
modelPath: 'separablemodel',
feedShape: {
fw: 320,
fh: 320
},
outputShapes: {
from: [10, 10, 25, 1],
to: [10, 10, 5, 5]
}
}
};
/* eslint-disable */
import Utils from './utils';
import Tensor from './tensor';
/**
* @file op data object
* @author wangqun, yangmingming
*
*/
const keys = [
'paddings',
'strides',
'dilations',
'ksize'
];
// attributes read from the tensor objects
const tensorAttrs = [
'length_shape',
'width_shape',
'height_shape',
'width_texture',
'height_texture',
'offset_x',
'offset_y',
'limit',
'channel',
'total_shape'
];
// constants required by the shaders
const shaderAttrs = {
scale: {
'bias': 'bias_value',
'scale': 'multi_value'
},
pool2d: {
'pooling_type': 'type_pool'
},
pool2d_winograd: {
'pooling_type': 'type_pool'
}
};
// mapping from model tensor names to paddleJS tensor names
const tensorName = {
'input': 'origin',
'x': 'origin',
'filter': 'filter',
'y': 'counter',
'output': 'out',
'out': 'out',
'scale': 'scale',
'bias': 'bias',
'mean': 'mean',
'variance': 'variance'
};
// unique behavior
const opBehavior = {
conv2d: [
'needBatch',
'isApplySeparableConv'
],
batchnorm: [
'needBatch',
'mergeTensor'
],
elementwise_add: [
'broadcast',
'needBatch'
],
conv2d_elementwise_add: [
'mergeAttrs',
'setActiveFunc',
'needBatch'
],
pool2d: [
'isMax',
'needBatch',
'setPacked',
'isGlobalPooling'
],
relu: [
'transToPrelu',
'needBatch'
],
relu6: [
'transToRelu6',
'needBatch'
],
leaky_relu: [
'transToLeakyrelu',
'needBatch'
],
mul: [
'reshape',
'needBatch'
],
softmax: [
]
};
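// The behavior lists above run in order inside buildTensor(); for conv2d, for
// example, needBatch marks every tensor for batch padding before
// isApplySeparableConv may rename the op to conv2d_depthwise.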
const mergeType = 'conv2d-elementwise_add';
export default class OpData {
constructor(name, input = {}, output = {}, attrs = {}) {
console.log('now in constructor');
console.dir(name);
console.dir(input);
console.dir(output);
this.realName = name;
this.name = name;
this.attrs = attrs;
// check whether this is a fused (merged) op
this.checkIsMerge();
// whether to skip the current op (used for dropout)
// dropout: during training, network units are temporarily dropped from the network with a certain probability
this.isPass = this.checkIsPass();
if (this.isPass) {
this.input = input;
this.output = output;
// op data; not extended for now
this.data = {
'active_function': 'scale',
'multi_value': '1.0',
'bias_value': '0.0'
};
// tensor data
this.tensor = {};
this.buildTensor();
this.buildAttrs();
}
}
buildTensor() {
// todo: is shape alignment needed?
// todo: is tensor broadcasting needed?
const tensorData = [];
for (let key in this.input) {
if (this.input.hasOwnProperty(key)) {
const data = this.input[key] || [{}];
// take the first entry by default
if (tensorName[key.toLowerCase()]) {
data[0].tensorName = tensorName[key.toLowerCase()];
tensorData.push(data[0]);
}
}
}
// todo: temporarily delete Y from the output
delete this.output.Y;
// output tensors
for (let key in this.output) {
if (this.output.hasOwnProperty(key)) {
// take the first entry by default
const data = this.output[key] || [{}];
if (tensorName[key.toLowerCase()]) {
data[0].tensorName = tensorName[key.toLowerCase()];
tensorData.push(data[0]);
}
}
}
// unique behavior
const behavior = opBehavior[this.name] || [];
behavior.forEach(behavior => {
this[behavior](tensorData);
});
// build the Tensor objects
tensorData.forEach(data => {
// console.log(data);
if (data) {
if (data.notTensor) {
this.tensor[data.tensorName] = {
name: data.tensorName,
data: new Float32Array(data.data),
total_shape: data.data.length
};
} else {
this.tensor[data.tensorName] = new Tensor({
type: data.name,
name: data.tensorName,
shape: data.shape,
data: data.data,
needBatch: data.needBatch || false,
notCompressed: data.notCompressed || false,
isPacked: data.isPacked || false
});
}
}
});
// console.dir(['tensors', this.tensor]);
// console.log('now in buildTensor show this and tensorData');
// console.log(this);
// console.log(tensorData);
}
buildAttrs() {
// computed attributes
for (let key in this.attrs) {
if (this.attrs.hasOwnProperty(key)) {
const item = this.attrs[key];
if (Object.prototype.toString.call(item) === '[object Array]') {
if (keys.indexOf(key) > -1) {
this.data[key + '_x'] = item[0];
this.data[key + '_y'] = item[1];
}
} else {
this.data[key] = item;
// collect the data the shader needs
let shaderAttr = shaderAttrs[this.name];
if (shaderAttr && shaderAttr.hasOwnProperty(key)) {
this.data[shaderAttr[key]] = item;
}
}
}
}
// collect data from the tensors
for (let key in this.tensor) {
const tensor = this.tensor[key];
tensorAttrs.forEach(attr => {
this.data[attr+ '_' + tensor.name] = tensor[attr];
});
}
}
needBatch(tensorData = []) {
tensorData.forEach(data => (data.needBatch = true));
}
isGlobalPooling(tensorData = []) {
let counter = tensorData.filter(tensor => (tensor.tensorName === 'origin'))[0] || {};
let length = counter.shape && counter.shape.length || 0;
if (length > 2 && this.attrs['global_pooling']) {
this.attrs.ksize = [counter.shape[length - 2], counter.shape[length - 1]];
}
}
mergeAttrs() {
this.attrs = this.attrs.reduce((attrs, item) => {
return Object.assign(attrs, item);
}, {});
}
isApplyWinoGrad(tensorData = []) {
const filter = tensorData.filter(item => {
const [b, c, h, w] = item.shape;
return (h === 3) && (w === 3) && (item.tensorName === 'filter');
});
// use the Winograd algorithm
if (filter && filter.length) {
this.setPacked(tensorData);
this.applyWinograd(tensorData);
this.setOutputPacked(tensorData);
this.name += '_winograd';
}
}
isApplySeparableConv(tensorData = []) {
const groups = this.attrs.groups;
const filter = tensorData.filter(item => {
const [b, c, h, w] = item.shape;
return (b === groups) && (c === 1) && (item.tensorName === 'filter');
});
if (filter && filter.length) {
// separable conv can be applied
this.name += '_depthwise';
}
}
setPacked(tensorData = []) {
const isPacked = this.attrs.ispacked;
tensorData.forEach(item => {
if (item.tensorName === 'origin' && isPacked) {
item.isPacked = true;
if (this.name.indexOf('pool') > -1) {
this.name += '_winograd';
}
}
});
}
applyWinograd(tensorData = []) {
tensorData.forEach(item => {
if (item.tensorName === 'filter') {
const [b, c, h, w] = item.shape;
item.shape = [b, c, 4, 4];
item.data = Utils.applyFilterWinograd(item.data, item.shape);
}
});
}
setOutputPacked(tensorData = []) {
tensorData.forEach(item => {
if (item.tensorName === 'out') {
item.isPacked = true;
}
});
}
broadcast(tensorData = []) {
tensorData.forEach(item => {
if (item.tensorName === 'counter') {
item.notTensor = true;
}
});
// todo: generic broadcasting (e.g. for the mobilenet model) is not implemented
// yet; counter tensors are instead marked notTensor and fed to the shader as
// uniform arrays (see the elementwise_add config above).
}
isMax(tensorData = []) {
const type = this.attrs['pooling_type'] === 'max' ? 1 : 0;
this.attrs['pooling_type'] = type;
if (type === 1) {
this.name += '_max';
}
}
transToPrelu(tensorData = []) {
this.data['multi_value'] = '0.0';
this.data['active_function'] = 'prelu';
}
transToRelu6(tensorData = []) {
this.data['multi_value'] = this.attrs['threshold'];
this.data['active_function'] = 'relu6';
}
transToLeakyrelu(tensorData = []) {
this.data['multi_value'] = this.attrs.alpha;
this.data['active_function'] = 'leakyRelu';
this.name = 'relu';
}
setActiveFunc() {
// used by fused ops
const suffix = this.realName.replace(mergeType + '-', '');
if (suffix === 'leaky_relu') {
this.data['multi_value'] = this.attrs.alpha;
this.data['active_function'] = 'leakyRelu';
}
}
reshape(tensorData = []) {
let input = tensorData[0];
let counter = tensorData[1];
if (counter.shape.length > input.shape.length) {
input = tensorData[1];
counter = tensorData[0];
}
if (input.shape.length > 2 && counter.shape.length === 2) {
let shape = Utils.getReshapeInPaddle(input.shape, counter.shape, tensorData[2].shape);
input.shape = shape;
}
}
mergeTensor(tensorData = []) {
// merge scale, bias, variance and mean
let constants = ['scale', 'bias', 'variance', 'mean'];
let result = {};
let data = [];
tensorData.forEach((tensor, index) => {
result[tensor.tensorName] = tensor;
result[tensor.tensorName + 'Index'] = index;
});
for (let i = 0; i < result[constants[0]].shape[0]; i++) {
data.push(result[constants[0]].data[i]);
data.push(result[constants[1]].data[i]);
data.push(result[constants[2]].data[i]);
data.push(result[constants[3]].data[i]);
}
tensorData[result[constants[0] + 'Index']].data = data;
for (let i = 0; i < constants.length; i++){
tensorData[result[constants[i] + 'Index']].data = result[constants[i]].data;
}
// make full use of shader space
tensorData[result[constants[0] + 'Index']].notCompressed = true;
tensorData[result[constants[0] + 'Index']].shape[0] *= 4;
tensorData.splice(result[constants[1] + 'Index'], 1, 0);
tensorData.splice(result[constants[2] + 'Index'], 1, 0);
tensorData.splice(result[constants[3] + 'Index'], 1, 0);
}
checkIsMerge() {
if (this.name.indexOf(mergeType) > -1
&& Object.prototype.toString.apply(this.attrs) === '[object Array]') {
// the first fused op
this.name = 'conv2d_elementwise_add';
return true;
}
return false;
}
checkIsPass() {
if (this.name === 'dropout') {
if (this.attrs['dropout_implementation'] === 'downgrade_in_infer') {
this.name = 'scale';
this.attrs['scale'] = this.attrs['dropout_prob'];
this.attrs['bias'] = 0.0;
return true;
}
return false;
}
if (this.name === 'depthwise_conv2d') {
this.name = 'conv2d';
}
return true;
}
dispose() {
this.input = null;
this.output = null;
this.attrs = null;
for (let key in this.tensor) {
this.tensor[key].dispose();
}
this.tensor = {};
}
}
import Utils from './utils';
/**
* @file Tensor class
* @author wangqun, yangmingming
*/
export default class Tensor {
constructor(opts = {}) {
this.opts = opts;
// data layout
this.isPacked = opts.isPacked || false;
// tensor name
this.name = opts.name;
// tensor shape
let shape = this.shape = opts.shape;
// number of elements in the raw data
this.total = shape.reduce((all, num) => all * num);
// pad the shape to rank 4 when the image tensor carries a batch
if (opts.needBatch && shape.length < 4) {
let batch = [];
for (let i = 0; i < (4 - shape.length); i++) {
batch.push(1);
}
shape = batch.concat(shape);
this.shape = shape;
}
// texture layout info derived from the tensor shape
let {offsetX, offsetY, exceedMax, zeroNumber, shape: shape_texture} = Utils.getTextureInfoFromTensorShape(shape, opts.isPacked);
this.shape_texture = shape_texture;
this.exceedMax = exceedMax;
this.offsetX = offsetX;
this.offsetY = offsetY;
// tensor data
let data;
if (opts.type === 'image' || opts.type === 'x') {
this.data = opts.data;
}
else if (opts.data && opts.data.length) {
data = new Float32Array(opts.data.length);
if (!opts.notCompressed) {
let b = shape[0];
let c = shape[1];
let h = shape[2];
let w = shape[3];
if (w) {
for (let i = 0; i < opts.data.length; i++) {
let j = i / (c * w) | 0;
let k = i % (c * w);
let b1 = j / h | 0;
let h1 = j % h;
let c1 = k % c;
let w1 = k / c | 0;
let l = b1 * (c * h * w) + c1 * (h * w) + h1 * (w) + w1;
data[i] = opts.data[l];
}
this.data = data;
}
else {
if (opts.data.length > this.total) {
opts.data = opts.data.slice(0, this.total);
}
this.data = new Float32Array(opts.data);
}
} else {
// batchnorm scale
this.shape_texture = [4, 1, this.total / 4];
this.data = new Float32Array(opts.data);
}
// free the cached input data
opts.data = null;
}
}
/**
* Get the value at a tensor coordinate; shape e.g. [M, W, H, D]
* @param pos {Array} tensor coordinate index
* @return {Number} tensor value
*/
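// Example (hypothetical shape): shape [1, 3, 4, 4] gives numbers_shape
// [48, 16, 4, 1], so getValue([0, 1, 2, 3]) returns data[0*48 + 1*16 + 2*4 + 3] = data[27].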
getValue(pos = []) {
let p = [].concat(pos);
let len = p.length;
let sLen = this.shape.length;
// left-pad the coordinate with zeros
for (let i = 0; i < (sLen - len); i++) {
p.unshift(0);
}
let index = 0;
const strides = this.numbers_shape;
for (let i = 0; i < sLen; i++) {
index += p[i] * strides[i];
}
return this.data[index];
}
get width_texture() {
let length = this.shape_texture.length;
return this.shape_texture[length - 1];
}
get height_texture() {
let length = this.shape_texture.length;
return this.shape_texture[length - 2];
}
get width_shape() {
let length = this.shape.length;
return this.shape[length - 1];
}
get height_shape() {
let length = this.shape.length;
return this.shape[length - 2];
}
get channel() {
let length = this.shape.length;
if (length >= 3) {
return this.shape[length - 3];
}
return 0;
}
get offset_x() {
return this.offsetX;
}
get offset_y() {
return this.offsetY;
}
get limit() {
return this.exceedMax ? 'Limit' : '';
}
get length_shape() {
return this.shape.length || 0;
}
/**
* Get the stride of each shape dimension
* @return {Array} strides, same length as shape
*/
get numbers_shape() {
let numbers = [];
let sLen = this.shape.length;
for (let i = 0; i < (sLen - 1); i++) {
let number = this.shape.slice(i + 1).reduce((total, num) => total * num);
numbers.push(number);
}
// keep the same length as shape
numbers.push(1);
return numbers;
}
get total_shape() {
return this.total;
}
dispose() {
if (this.data) {
this.data = null;
}
}
}
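// Minimal usage sketch (hypothetical values): a 3x2x2 weight padded with a batch dim.
// const t = new Tensor({name: 'filter', shape: [3, 2, 2], needBatch: true, data: new Float32Array(12)});
// t.shape -> [1, 3, 2, 2]; t.total -> 12; t.width_shape -> 2; t.channel -> 3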
/**
* @file utility functions
* @author wangqun, yangmingming
*/
export default {
// todo: only fits 2-D matrix multiplication; implement a general version later
getReshapeInPaddle(inputShape = [], counterShape = [], outShape = []) {
let total = inputShape.reduce((all, num) => all * num);
if (outShape.length === 1) {
return [1, total];
} else {
return [outShape[0], total / outShape[0]];
}
},
getBroadcastShapeInPaddle(shapeA = [], shapeB = [], axis = 1) {
// todo: simplified version; a general implementation is still needed
let bigger = shapeA;
let result = shapeB;
if (shapeA.length - shapeB.length < 0) {
bigger = shapeB;
result = shapeA;
}
return result.concat(bigger.slice(axis));
},
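// Example: broadcasting inShape [3, 1] to outShape [3, 4] expands dimension 1,
// so getBroadcastDims([3, 1], [3, 4]) returns [1].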
getBroadcastDims(inShape = [], outShape = []) {
const inRank = inShape.length;
const dims = [];
for (let i = 0; i < inRank; i++) {
const dim = inRank - 1 - i;
const a = inShape[dim] || 1;
const b = outShape[outShape.length - 1 - i] || 1;
if (b > 1 && a === 1) {
dims.unshift(dim);
}
}
return dims;
},
getBroadcastShape(shapeA = [], shapeB = []) {
const result = [];
const max = Math.max(shapeA.length, shapeB.length);
for (let i = 0; i < max; i++) {
let a = shapeA[shapeA.length - i - 1];
if (a == null) { // a missing dimension (undefined) counts as 1
a = 1;
}
let b = shapeB[shapeB.length - i - 1];
if (b == null) {
b = 1;
}
if (a === 1) {
result.unshift(b);
} else if (b === 1) {
result.unshift(a);
} else if (a !== b) {
return null;
} else {
result.unshift(a);
}
}
return result;
},
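// applyFilterWinograd precomputes the Winograd F(2x2, 3x3) filter transform
// U = G · f · Gᵀ with G = [[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]],
// expanding each 3x3 filter tile into the 4x4 tile consumed by the winograd shader.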
applyFilterWinograd(data, shape) {
const [b, c, h, w] = shape;
let offset = 0;
let index = 0;
const result = new Float32Array(b * c * 16);
// h and w are both 3
const size2D = 9;
for (let i = 0; i < b; i++) {
for (let j = 0; j < c; j++) {
const filter = data.subarray(index, index + size2D);
const [f11, f12, f13, f21, f22, f23, f31, f32, f33] = filter;
const square = [
f11,
0.5 * f11 + 0.5 * f12 + 0.5 * f13,
0.5 * f11 - 0.5 * f12 + 0.5 * f13,
f13,
0.5 * f11 + 0.5 * f21 + 0.5 * f31,
0.25 * f11 + 0.25 * f12 + 0.25 * f13 + 0.25 * f21 + 0.25 * f22 + 0.25 * f23 + 0.25 * f31 + 0.25 * f32 + 0.25 * f33,
0.25 * f11 - 0.25 * f12 + 0.25 * f13 + 0.25 * f21 - 0.25 * f22 + 0.25 * f23 + 0.25 * f31 - 0.25 * f32 + 0.25 * f33,
0.5 * f13 + 0.5 * f23 + 0.5 * f33,
0.5 * f11 - 0.5 * f21 + 0.5 * f31,
0.25 * f11 + 0.25 * f12 + 0.25 * f13 - 0.25 * f21 - 0.25 * f22 - 0.25 * f23 + 0.25 * f31 + 0.25 * f32 + 0.25 * f33,
0.25 * f11 - 0.25 * f12 + 0.25 * f13 - 0.25 * f21 + 0.25 * f22 - 0.25 * f23 + 0.25 * f31 - 0.25 * f32 + 0.25 * f33,
0.5 * f13 - 0.5 * f23 + 0.5 * f33,
f31,
0.5 * f31 + 0.5 * f32 + 0.5 * f33,
0.5 * f31 - 0.5 * f32 + 0.5 * f33,
f33
];
result.set(square, offset);
offset += 16;
index += size2D;
}
}
return result;
},
/**
* Get the texture shape and zero-padding count for a tensor shape
* @param shape {Array} tensor shape
* @return {Object} texture info {offsetX, offsetY, exceedMax, shape, zeroNumber}
*/
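// Example: getTextureInfoFromTensorShape([1, 3, 320, 320]) with isPacked false
// gives height = b * h = 320 and width = c * w = 960 (both within the 4096 limit),
// so the returned texture shape is [4, 320, 960] with zero offsets.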
getTextureInfoFromTensorShape(shape = [], isPacked = false) {
let b = shape[0] || 1;
let c = shape[1] || 1;
let h = shape[2] || 1;
let w = shape[3] || 1;
let height = b * h;
let width = c * w;
let offsetX = 0;
let offsetY = 0;
// the max texture size on Android and iOS is 4096; reshape storage to (2bh, cw / 2)
let exceedMax = false;
if (height > 4096 || width > 4096) {
height *= 2;
width = c * (Math.ceil(w / 2));
exceedMax = true;
}
if (isPacked) {
// packed layout
height = b * c * Math.ceil(h / 2);
width = Math.ceil(w / 2);
offsetX = w % 2;
offsetY = h % 2;
}
return {
offsetX,
offsetY,
exceedMax,
shape: [4, height, width],
zeroNumber: 0
};
},
// get the max value in an array and its index
getMaxItem(datas = []) {
let max = Math.max.apply(null, datas);
let index = datas.indexOf(max);
return {value: max, index};
},
// load a shader source file
async loadShader(name) {
let shader = await fetch(this.getShaderFile(name));
return shader.text();
},
getShaderFile(url) {
// todo: resolve the shader file via the build tooling
const parts = url.split('/');
let length = parts.length;
return '/' + parts[length - 1];
},
img2texture(renderData = {}) {
const {height_texture, width_texture, shape} = renderData;
const total = height_texture * width_texture * 4;
const b = shape[0];
const c = shape[1];
const h = shape[2];
const w = shape[3];
let data = new Float32Array(b * c * h * w * 4);
let offset = 0;
// write each NCHW element into the R channel of one RGBA texel
for (let i = 0; i < b * c * h * w; i++) {
let j = (i / (c * w)) | 0;
let k = i % (c * w);
let b1 = j / h | 0;
let h1 = j % h;
let c1 = k % c;
let w1 = k / c | 0;
let l = b1 * (c * h * w) + c1 * (h * w) + h1 * (w) + w1;
data[offset] = renderData.data[l];
offset += 4;
}
renderData.data = data;
}
};
[中文版](./README_cn.md)
# PaddleJS Tests
Unit and functional tests for Baidu paddle.js can be found in this section.
## Basic Usage
After installing the dependencies, run npm run testunits. You can specify the target operator to execute; its correctness is judged against test cases that pair inputs with expected outputs.
```bash
cd web # Go to root
npm i # Install dependencies
mkdir dist # Create the resource directory
cd dist # Enter the resource directory
git clone testunits # Get the unit test data
cd .. # Return to the web project root
npm run testunits # Run the unit tests
```
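The operator under test is selected inside the test entry script (test/unitTest.es6 in this repo) by mapping an op name to its unit-test model JSON under test/unitData; a simplified excerpt:
```js
// Pick the operator to exercise by changing modelType.
const unitPath = {
    'conv2d': 'model.test.conv2d.json',
    'softmax': 'model.test.softmax.json'
};
const modelType = 'softmax'; // target operator
const unitData = unitPath[modelType]; // unit-test model file to load
```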
## Browser coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
# PaddleJS Unit Tests
Unit and functional tests for Baidu PaddleJS are implemented in this section.
## Basic Usage
Run npm run testunits to execute the specified target operator; its correctness is judged against test cases that pair inputs with expected outputs.
```bash
cd web # Go to the web project root
npm i # Install dependencies
mkdir dist # Create the resource directory
cd dist # Enter the resource directory
git clone testunits # Get the unit test data
cd .. # Return to the web project root
npm run testunits # Run the unit tests
```
## Browser coverage
* PC: Chrome
* Mac: Chrome
* Android: Baidu App and QQ Browser
/*!
diff v2.0.1
Software License Agreement (BSD License)
Copyright (c) 2009-2015, Kevin Decker <kpdecker@gmail.com>
All rights reserved.
Redistribution and use of this software in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above
copyright notice, this list of conditions and the
following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the
following disclaimer in the documentation and/or other
materials provided with the distribution.
* Neither the name of Kevin Decker nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@license
*/
(function webpackUniversalModuleDefinition(root, factory) {
if(typeof exports === 'object' && typeof module === 'object')
module.exports = factory();
else if(typeof define === 'function' && define.amd)
define(factory);
else if(typeof exports === 'object')
exports["JsDiff"] = factory();
else
root["JsDiff"] = factory();
})(this, function() {
return /******/ (function(modules) { // webpackBootstrap
/******/ // The module cache
/******/ var installedModules = {};
/******/ // The require function
/******/ function __webpack_require__(moduleId) {
/******/ // Check if module is in cache
/******/ if(installedModules[moduleId])
/******/ return installedModules[moduleId].exports;
/******/ // Create a new module (and put it into the cache)
/******/ var module = installedModules[moduleId] = {
/******/ exports: {},
/******/ id: moduleId,
/******/ loaded: false
/******/ };
/******/ // Execute the module function
/******/ modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
/******/ // Flag the module as loaded
/******/ module.loaded = true;
/******/ // Return the exports of the module
/******/ return module.exports;
/******/ }
/******/ // expose the modules object (__webpack_modules__)
/******/ __webpack_require__.m = modules;
/******/ // expose the module cache
/******/ __webpack_require__.c = installedModules;
/******/ // __webpack_public_path__
/******/ __webpack_require__.p = "";
/******/ // Load entry module and return exports
/******/ return __webpack_require__(0);
/******/ })
/************************************************************************/
/******/ ([
/* 0 */
/***/ function(module, exports, __webpack_require__) {
/* See LICENSE file for terms of use */
/*
* Text diff implementation.
*
* This library supports the following APIS:
* JsDiff.diffChars: Character by character diff
* JsDiff.diffWords: Word (as defined by \b regex) diff which ignores whitespace
* JsDiff.diffLines: Line based diff
*
* JsDiff.diffCss: Diff targeted at CSS content
*
* These methods are based on the implementation proposed in
* "An O(ND) Difference Algorithm and its Variations" (Myers, 1986).
* http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.6927
*/
'use strict';
exports.__esModule = true;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _diffBase = __webpack_require__(1);
var _diffBase2 = _interopRequireDefault(_diffBase);
var _diffCharacter = __webpack_require__(3);
var _diffWord = __webpack_require__(4);
var _diffLine = __webpack_require__(5);
var _diffSentence = __webpack_require__(6);
var _diffCss = __webpack_require__(7);
var _diffJson = __webpack_require__(8);
var _patchApply = __webpack_require__(9);
var _patchCreate = __webpack_require__(10);
var _convertDmp = __webpack_require__(12);
var _convertXml = __webpack_require__(13);
exports.Diff = _diffBase2['default'];
exports.diffChars = _diffCharacter.diffChars;
exports.diffWords = _diffWord.diffWords;
exports.diffWordsWithSpace = _diffWord.diffWordsWithSpace;
exports.diffLines = _diffLine.diffLines;
exports.diffTrimmedLines = _diffLine.diffTrimmedLines;
exports.diffSentences = _diffSentence.diffSentences;
exports.diffCss = _diffCss.diffCss;
exports.diffJson = _diffJson.diffJson;
exports.structuredPatch = _patchCreate.structuredPatch;
exports.createTwoFilesPatch = _patchCreate.createTwoFilesPatch;
exports.createPatch = _patchCreate.createPatch;
exports.applyPatch = _patchApply.applyPatch;
exports.convertChangesToDMP = _convertDmp.convertChangesToDMP;
exports.convertChangesToXML = _convertXml.convertChangesToXML;
exports.canonicalize = _diffJson.canonicalize;
/***/ },
/* 1 */
/***/ function(module, exports, __webpack_require__) {
'use strict';
exports.__esModule = true;
exports['default'] = Diff;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _utilMap = __webpack_require__(2);
var _utilMap2 = _interopRequireDefault(_utilMap);
function Diff(ignoreWhitespace) {
this.ignoreWhitespace = ignoreWhitespace;
}
Diff.prototype = {
diff: function diff(oldString, newString, callback) {
var self = this;
function done(value) {
if (callback) {
setTimeout(function () {
callback(undefined, value);
}, 0);
return true;
} else {
return value;
}
}
// Allow subclasses to massage the input prior to running
oldString = this.castInput(oldString);
newString = this.castInput(newString);
// Handle the identity case (this is due to unrolling editLength == 0
if (newString === oldString) {
return done([{ value: newString }]);
}
if (!newString) {
return done([{ value: oldString, removed: true }]);
}
if (!oldString) {
return done([{ value: newString, added: true }]);
}
newString = this.removeEmpty(this.tokenize(newString));
oldString = this.removeEmpty(this.tokenize(oldString));
var newLen = newString.length,
oldLen = oldString.length;
var editLength = 1;
var maxEditLength = newLen + oldLen;
var bestPath = [{ newPos: -1, components: [] }];
// Seed editLength = 0, i.e. the content starts with the same values
var oldPos = this.extractCommon(bestPath[0], newString, oldString, 0);
if (bestPath[0].newPos + 1 >= newLen && oldPos + 1 >= oldLen) {
// Identity per the equality and tokenizer
return done([{ value: newString.join('') }]);
}
// Main worker method. checks all permutations of a given edit length for acceptance.
function execEditLength() {
for (var diagonalPath = -1 * editLength; diagonalPath <= editLength; diagonalPath += 2) {
var basePath = undefined;
var addPath = bestPath[diagonalPath - 1],
removePath = bestPath[diagonalPath + 1],
_oldPos = (removePath ? removePath.newPos : 0) - diagonalPath;
if (addPath) {
// No one else is going to attempt to use this value, clear it
bestPath[diagonalPath - 1] = undefined;
}
var canAdd = addPath && addPath.newPos + 1 < newLen,
canRemove = removePath && 0 <= _oldPos && _oldPos < oldLen;
if (!canAdd && !canRemove) {
// If this path is a terminal then prune
bestPath[diagonalPath] = undefined;
continue;
}
// Select the diagonal that we want to branch from. We select the prior
// path whose position in the new string is the farthest from the origin
// and does not pass the bounds of the diff graph
if (!canAdd || canRemove && addPath.newPos < removePath.newPos) {
basePath = clonePath(removePath);
self.pushComponent(basePath.components, undefined, true);
} else {
basePath = addPath; // No need to clone, we've pulled it from the list
basePath.newPos++;
self.pushComponent(basePath.components, true, undefined);
}
_oldPos = self.extractCommon(basePath, newString, oldString, diagonalPath);
// If we have hit the end of both strings, then we are done
if (basePath.newPos + 1 >= newLen && _oldPos + 1 >= oldLen) {
return done(buildValues(basePath.components, newString, oldString, self.useLongestToken));
} else {
// Otherwise track this path as a potential candidate and continue.
bestPath[diagonalPath] = basePath;
}
}
editLength++;
}
// Performs the length of edit iteration. Is a bit fugly as this has to support the
// sync and async mode which is never fun. Loops over execEditLength until a value
// is produced.
if (callback) {
(function exec() {
setTimeout(function () {
// This should not happen, but we want to be safe.
/* istanbul ignore next */
if (editLength > maxEditLength) {
return callback();
}
if (!execEditLength()) {
exec();
}
}, 0);
})();
} else {
while (editLength <= maxEditLength) {
var ret = execEditLength();
if (ret) {
return ret;
}
}
}
},
pushComponent: function pushComponent(components, added, removed) {
var last = components[components.length - 1];
if (last && last.added === added && last.removed === removed) {
// We need to clone here as the component clone operation is just
// as shallow array clone
components[components.length - 1] = { count: last.count + 1, added: added, removed: removed };
} else {
components.push({ count: 1, added: added, removed: removed });
}
},
extractCommon: function extractCommon(basePath, newString, oldString, diagonalPath) {
var newLen = newString.length,
oldLen = oldString.length,
newPos = basePath.newPos,
oldPos = newPos - diagonalPath,
commonCount = 0;
while (newPos + 1 < newLen && oldPos + 1 < oldLen && this.equals(newString[newPos + 1], oldString[oldPos + 1])) {
newPos++;
oldPos++;
commonCount++;
}
if (commonCount) {
basePath.components.push({ count: commonCount });
}
basePath.newPos = newPos;
return oldPos;
},
equals: function equals(left, right) {
var reWhitespace = /\S/;
return left === right || this.ignoreWhitespace && !reWhitespace.test(left) && !reWhitespace.test(right);
},
removeEmpty: function removeEmpty(array) {
var ret = [];
for (var i = 0; i < array.length; i++) {
if (array[i]) {
ret.push(array[i]);
}
}
return ret;
},
castInput: function castInput(value) {
return value;
},
tokenize: function tokenize(value) {
return value.split('');
}
};
function buildValues(components, newString, oldString, useLongestToken) {
var componentPos = 0,
componentLen = components.length,
newPos = 0,
oldPos = 0;
for (; componentPos < componentLen; componentPos++) {
var component = components[componentPos];
if (!component.removed) {
if (!component.added && useLongestToken) {
var value = newString.slice(newPos, newPos + component.count);
value = _utilMap2['default'](value, function (value, i) {
var oldValue = oldString[oldPos + i];
return oldValue.length > value.length ? oldValue : value;
});
component.value = value.join('');
} else {
component.value = newString.slice(newPos, newPos + component.count).join('');
}
newPos += component.count;
// Common case
if (!component.added) {
oldPos += component.count;
}
} else {
component.value = oldString.slice(oldPos, oldPos + component.count).join('');
oldPos += component.count;
// Reverse add and remove so removes are output first to match common convention
// The diffing algorithm is tied to add then remove output and this is the simplest
// route to get the desired output with minimal overhead.
if (componentPos && components[componentPos - 1].added) {
var tmp = components[componentPos - 1];
components[componentPos - 1] = components[componentPos];
components[componentPos] = tmp;
}
}
}
return components;
}
function clonePath(path) {
return { newPos: path.newPos, components: path.components.slice(0) };
}
module.exports = exports['default'];
/***/ },
/* 2 */
/***/ function(module, exports) {
// Following this pattern to make sure the ignore next is in the correct place after babel builds
"use strict";
exports.__esModule = true;
exports["default"] = map;
/* istanbul ignore next */
function map(arr, mapper, that) {
if (Array.prototype.map) {
return Array.prototype.map.call(arr, mapper, that);
}
var other = new Array(arr.length);
for (var i = 0, n = arr.length; i < n; i++) {
other[i] = mapper.call(that, arr[i], i, arr);
}
return other;
}
module.exports = exports["default"];
/***/ },
/* 3 */
/***/ function(module, exports, __webpack_require__) {
'use strict';
exports.__esModule = true;
exports.diffChars = diffChars;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _base = __webpack_require__(1);
var _base2 = _interopRequireDefault(_base);
var characterDiff = new _base2['default']();
exports.characterDiff = characterDiff;
function diffChars(oldStr, newStr, callback) {
return characterDiff.diff(oldStr, newStr, callback);
}
/***/ },
/* 4 */
/***/ function(module, exports, __webpack_require__) {
'use strict';
exports.__esModule = true;
exports.diffWords = diffWords;
exports.diffWordsWithSpace = diffWordsWithSpace;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _base = __webpack_require__(1);
// Based on https://en.wikipedia.org/wiki/Latin_script_in_Unicode
//
// Ranges and exceptions:
// Latin-1 Supplement, 0080–00FF
// - U+00D7 × Multiplication sign
// - U+00F7 ÷ Division sign
// Latin Extended-A, 0100–017F
// Latin Extended-B, 0180–024F
// IPA Extensions, 0250–02AF
// Spacing Modifier Letters, 02B0–02FF
// - U+02C7 ˇ &#711; Caron
// - U+02D8 ˘ &#728; Breve
// - U+02D9 ˙ &#729; Dot Above
// - U+02DA ˚ &#730; Ring Above
// - U+02DB ˛ &#731; Ogonek
// - U+02DC ˜ &#732; Small Tilde
// - U+02DD ˝ &#733; Double Acute Accent
// Latin Extended Additional, 1E00–1EFF
var _base2 = _interopRequireDefault(_base);
var extendedWordChars = /^[A-Za-z\xC0-\u02C6\u02C8-\u02D7\u02DE-\u02FF\u1E00-\u1EFF]+$/;
var wordDiff = new _base2['default'](true);
exports.wordDiff = wordDiff;
var wordWithSpaceDiff = new _base2['default']();
exports.wordWithSpaceDiff = wordWithSpaceDiff;
wordDiff.tokenize = wordWithSpaceDiff.tokenize = function (value) {
var tokens = value.split(/(\s+|\b)/);
// Join the boundary splits that we do not consider to be boundaries. This is primarily the extended Latin character set.
for (var i = 0; i < tokens.length - 1; i++) {
// If we have an empty string in the next field and we have only word chars before and after, merge
if (!tokens[i + 1] && tokens[i + 2] && extendedWordChars.test(tokens[i]) && extendedWordChars.test(tokens[i + 2])) {
tokens[i] += tokens[i + 2];
tokens.splice(i + 1, 2);
i--;
}
}
return tokens;
};
function diffWords(oldStr, newStr, callback) {
return wordDiff.diff(oldStr, newStr, callback);
}
function diffWordsWithSpace(oldStr, newStr, callback) {
return wordWithSpaceDiff.diff(oldStr, newStr, callback);
}
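// Example: diffWords('beep boop', 'beep boob') treats whitespace as ignorable,
// yielding parts like {value: 'beep '}, {value: 'boop', removed: true},
// {value: 'boob', added: true}.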
/***/ },
/* 5 */
/***/ function(module, exports, __webpack_require__) {
'use strict';
exports.__esModule = true;
exports.diffLines = diffLines;
exports.diffTrimmedLines = diffTrimmedLines;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _base = __webpack_require__(1);
var _base2 = _interopRequireDefault(_base);
var lineDiff = new _base2['default']();
exports.lineDiff = lineDiff;
var trimmedLineDiff = new _base2['default']();
exports.trimmedLineDiff = trimmedLineDiff;
trimmedLineDiff.ignoreTrim = true;
lineDiff.tokenize = trimmedLineDiff.tokenize = function (value) {
var retLines = [],
lines = value.split(/^/m);
for (var i = 0; i < lines.length; i++) {
var line = lines[i],
lastLine = lines[i - 1],
lastLineLastChar = lastLine && lastLine[lastLine.length - 1];
// Merge lines that may contain windows new lines
if (line === '\n' && lastLineLastChar === '\r') {
retLines[retLines.length - 1] = retLines[retLines.length - 1].slice(0, -1) + '\r\n';
} else {
if (this.ignoreTrim) {
line = line.trim();
// add a newline unless this is the last line.
if (i < lines.length - 1) {
line += '\n';
}
}
retLines.push(line);
}
}
return retLines;
};
function diffLines(oldStr, newStr, callback) {
return lineDiff.diff(oldStr, newStr, callback);
}
function diffTrimmedLines(oldStr, newStr, callback) {
return trimmedLineDiff.diff(oldStr, newStr, callback);
}
/***/ },
/* 6 */
/***/ function(module, exports, __webpack_require__) {
'use strict';
exports.__esModule = true;
exports.diffSentences = diffSentences;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _base = __webpack_require__(1);
var _base2 = _interopRequireDefault(_base);
var sentenceDiff = new _base2['default']();
exports.sentenceDiff = sentenceDiff;
sentenceDiff.tokenize = function (value) {
return value.split(/(\S.+?[.!?])(?=\s+|$)/);
};
function diffSentences(oldStr, newStr, callback) {
return sentenceDiff.diff(oldStr, newStr, callback);
}
/***/ },
/* 7 */
/***/ function(module, exports, __webpack_require__) {
'use strict';
exports.__esModule = true;
exports.diffCss = diffCss;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _base = __webpack_require__(1);
var _base2 = _interopRequireDefault(_base);
var cssDiff = new _base2['default']();
exports.cssDiff = cssDiff;
cssDiff.tokenize = function (value) {
return value.split(/([{}:;,]|\s+)/);
};
function diffCss(oldStr, newStr, callback) {
return cssDiff.diff(oldStr, newStr, callback);
}
/***/ },
/* 8 */
/***/ function(module, exports, __webpack_require__) {
'use strict';
exports.__esModule = true;
exports.diffJson = diffJson;
exports.canonicalize = canonicalize;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _base = __webpack_require__(1);
var _base2 = _interopRequireDefault(_base);
var _line = __webpack_require__(5);
var objectPrototypeToString = Object.prototype.toString;
var jsonDiff = new _base2['default']();
// Discriminate between two lines of pretty-printed, serialized JSON where one of them has a
// dangling comma and the other doesn't. Turns out including the dangling comma yields the nicest output:
exports.jsonDiff = jsonDiff;
jsonDiff.useLongestToken = true;
jsonDiff.tokenize = _line.lineDiff.tokenize;
jsonDiff.castInput = function (value) {
return typeof value === 'string' ? value : JSON.stringify(canonicalize(value), undefined, ' ');
};
jsonDiff.equals = function (left, right) {
return _base2['default'].prototype.equals(left.replace(/,([\r\n])/g, '$1'), right.replace(/,([\r\n])/g, '$1'));
};
function diffJson(oldObj, newObj, callback) {
return jsonDiff.diff(oldObj, newObj, callback);
}
// This function handles the presence of circular references by bailing out when encountering an
// object that is already on the "stack" of items being processed.
function canonicalize(obj, stack, replacementStack) {
stack = stack || [];
replacementStack = replacementStack || [];
var i = undefined;
for (i = 0; i < stack.length; i += 1) {
if (stack[i] === obj) {
return replacementStack[i];
}
}
var canonicalizedObj = undefined;
if ('[object Array]' === objectPrototypeToString.call(obj)) {
stack.push(obj);
canonicalizedObj = new Array(obj.length);
replacementStack.push(canonicalizedObj);
for (i = 0; i < obj.length; i += 1) {
canonicalizedObj[i] = canonicalize(obj[i], stack, replacementStack);
}
stack.pop();
replacementStack.pop();
} else if (typeof obj === 'object' && obj !== null) {
stack.push(obj);
canonicalizedObj = {};
replacementStack.push(canonicalizedObj);
var sortedKeys = [],
key = undefined;
for (key in obj) {
/* istanbul ignore else */
if (obj.hasOwnProperty(key)) {
sortedKeys.push(key);
}
}
sortedKeys.sort();
for (i = 0; i < sortedKeys.length; i += 1) {
key = sortedKeys[i];
canonicalizedObj[key] = canonicalize(obj[key], stack, replacementStack);
}
stack.pop();
replacementStack.pop();
} else {
canonicalizedObj = obj;
}
return canonicalizedObj;
}
/***/ },
/* 9 */
/***/ function(module, exports) {
'use strict';
exports.__esModule = true;
exports.applyPatch = applyPatch;
function applyPatch(oldStr, uniDiff) {
var diffstr = uniDiff.split('\n'),
hunks = [],
i = 0,
remEOFNL = false,
addEOFNL = false;
// Skip to the first change hunk
while (i < diffstr.length && !/^@@/.test(diffstr[i])) {
i++;
}
// Parse the unified diff
for (; i < diffstr.length; i++) {
if (diffstr[i][0] === '@') {
var chunkHeader = diffstr[i].split(/@@ -(\d+),(\d+) \+(\d+),(\d+) @@/);
hunks.unshift({
start: chunkHeader[3],
oldlength: +chunkHeader[2],
removed: [],
newlength: chunkHeader[4],
added: []
});
} else if (diffstr[i][0] === '+') {
hunks[0].added.push(diffstr[i].substr(1));
} else if (diffstr[i][0] === '-') {
hunks[0].removed.push(diffstr[i].substr(1));
} else if (diffstr[i][0] === ' ') {
hunks[0].added.push(diffstr[i].substr(1));
hunks[0].removed.push(diffstr[i].substr(1));
} else if (diffstr[i][0] === '\\') {
if (diffstr[i - 1][0] === '+') {
remEOFNL = true;
} else if (diffstr[i - 1][0] === '-') {
addEOFNL = true;
}
}
}
// Apply the diff to the input
var lines = oldStr.split('\n');
for (i = hunks.length - 1; i >= 0; i--) {
var hunk = hunks[i];
// Sanity check the input string. Bail if we don't match.
for (var j = 0; j < hunk.oldlength; j++) {
if (lines[hunk.start - 1 + j] !== hunk.removed[j]) {
return false;
}
}
Array.prototype.splice.apply(lines, [hunk.start - 1, hunk.oldlength].concat(hunk.added));
}
// Handle EOFNL insertion/removal
if (remEOFNL) {
while (!lines[lines.length - 1]) {
lines.pop();
}
} else if (addEOFNL) {
lines.push('');
}
return lines.join('\n');
}
/***/ },
/* 10 */
/***/ function(module, exports, __webpack_require__) {
'use strict';
exports.__esModule = true;
exports.structuredPatch = structuredPatch;
exports.createTwoFilesPatch = createTwoFilesPatch;
exports.createPatch = createPatch;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _diffPatch = __webpack_require__(11);
var _utilMap = __webpack_require__(2);
var _utilMap2 = _interopRequireDefault(_utilMap);
function structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options) {
if (!options) {
options = { context: 4 };
}
var diff = _diffPatch.patchDiff.diff(oldStr, newStr);
diff.push({ value: '', lines: [] }); // Append an empty value to make cleanup easier
function contextLines(lines) {
return _utilMap2['default'](lines, function (entry) {
return ' ' + entry;
});
}
var hunks = [];
var oldRangeStart = 0,
newRangeStart = 0,
curRange = [],
oldLine = 1,
newLine = 1;
var _loop = function (i) {
var current = diff[i],
lines = current.lines || current.value.replace(/\n$/, '').split('\n');
current.lines = lines;
if (current.added || current.removed) {
// If we have previous context, start with that
if (!oldRangeStart) {
var prev = diff[i - 1];
oldRangeStart = oldLine;
newRangeStart = newLine;
if (prev) {
curRange = options.context > 0 ? contextLines(prev.lines.slice(-options.context)) : [];
oldRangeStart -= curRange.length;
newRangeStart -= curRange.length;
}
}
// Output our changes
curRange.push.apply(curRange, _utilMap2['default'](lines, function (entry) {
return (current.added ? '+' : '-') + entry;
}));
// Track the updated file position
if (current.added) {
newLine += lines.length;
} else {
oldLine += lines.length;
}
} else {
// Identical context lines. Track line changes
if (oldRangeStart) {
// Close out any changes that have been output (or join overlapping)
if (lines.length <= options.context * 2 && i < diff.length - 2) {
// Overlapping
curRange.push.apply(curRange, contextLines(lines));
} else {
// end the range and output
var contextSize = Math.min(lines.length, options.context);
curRange.push.apply(curRange, contextLines(lines.slice(0, contextSize)));
var hunk = {
oldStart: oldRangeStart,
oldLines: oldLine - oldRangeStart + contextSize,
newStart: newRangeStart,
newLines: newLine - newRangeStart + contextSize,
lines: curRange
};
if (i >= diff.length - 2 && lines.length <= options.context) {
// EOF is inside this hunk
var oldEOFNewline = /\n$/.test(oldStr);
var newEOFNewline = /\n$/.test(newStr);
if (lines.length == 0 && !oldEOFNewline) {
// special case: old has no eol and no trailing context; no-nl can end up before adds
curRange.splice(hunk.oldLines, 0, '\\ No newline at end of file');
} else if (!oldEOFNewline || !newEOFNewline) {
curRange.push('\\ No newline at end of file');
}
}
hunks.push(hunk);
oldRangeStart = 0;
newRangeStart = 0;
curRange = [];
}
}
oldLine += lines.length;
newLine += lines.length;
}
};
for (var i = 0; i < diff.length; i++) {
_loop(i);
}
return {
oldFileName: oldFileName, newFileName: newFileName,
oldHeader: oldHeader, newHeader: newHeader,
hunks: hunks
};
}
function createTwoFilesPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options) {
var diff = structuredPatch(oldFileName, newFileName, oldStr, newStr, oldHeader, newHeader, options);
var ret = [];
if (oldFileName == newFileName) {
ret.push('Index: ' + oldFileName);
}
ret.push('===================================================================');
ret.push('--- ' + diff.oldFileName + (typeof diff.oldHeader === 'undefined' ? '' : '\t' + diff.oldHeader));
ret.push('+++ ' + diff.newFileName + (typeof diff.newHeader === 'undefined' ? '' : '\t' + diff.newHeader));
for (var i = 0; i < diff.hunks.length; i++) {
var hunk = diff.hunks[i];
ret.push('@@ -' + hunk.oldStart + ',' + hunk.oldLines + ' +' + hunk.newStart + ',' + hunk.newLines + ' @@');
ret.push.apply(ret, hunk.lines);
}
return ret.join('\n') + '\n';
}
function createPatch(fileName, oldStr, newStr, oldHeader, newHeader, options) {
return createTwoFilesPatch(fileName, fileName, oldStr, newStr, oldHeader, newHeader, options);
}
/***/ },
/* 11 */
/***/ function(module, exports, __webpack_require__) {
'use strict';
exports.__esModule = true;
// istanbul ignore next
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { 'default': obj }; }
var _base = __webpack_require__(1);
var _base2 = _interopRequireDefault(_base);
var patchDiff = new _base2['default']();
exports.patchDiff = patchDiff;
patchDiff.tokenize = function (value) {
var ret = [],
linesAndNewlines = value.split(/(\n|\r\n)/);
// Ignore the final empty token that occurs if the string ends with a new line
if (!linesAndNewlines[linesAndNewlines.length - 1]) {
linesAndNewlines.pop();
}
// Merge the content and line separators into single tokens
for (var i = 0; i < linesAndNewlines.length; i++) {
var line = linesAndNewlines[i];
if (i % 2) {
ret[ret.length - 1] += line;
} else {
ret.push(line);
}
}
return ret;
};
/***/ },
/* 12 */
/***/ function(module, exports) {
// See: http://code.google.com/p/google-diff-match-patch/wiki/API
"use strict";
exports.__esModule = true;
exports.convertChangesToDMP = convertChangesToDMP;
function convertChangesToDMP(changes) {
var ret = [],
change = undefined,
operation = undefined;
for (var i = 0; i < changes.length; i++) {
change = changes[i];
if (change.added) {
operation = 1;
} else if (change.removed) {
operation = -1;
} else {
operation = 0;
}
ret.push([operation, change.value]);
}
return ret;
}
/***/ },
/* 13 */
/***/ function(module, exports) {
'use strict';
exports.__esModule = true;
exports.convertChangesToXML = convertChangesToXML;
function convertChangesToXML(changes) {
var ret = [];
for (var i = 0; i < changes.length; i++) {
var change = changes[i];
if (change.added) {
ret.push('<ins>');
} else if (change.removed) {
ret.push('<del>');
}
ret.push(escapeHTML(change.value));
if (change.added) {
ret.push('</ins>');
} else if (change.removed) {
ret.push('</del>');
}
}
return ret.join('');
}
function escapeHTML(s) {
var n = s;
n = n.replace(/&/g, '&amp;');
n = n.replace(/</g, '&lt;');
n = n.replace(/>/g, '&gt;');
n = n.replace(/"/g, '&quot;');
return n;
}
/***/ }
/******/ ])
});
;
import 'babel-polyfill';
import Paddle from '../../src/paddle/paddle';
const unitPath = {
'conv2d': 'model.test.conv2d.json',
'batchnorm': 'model.test.batchnorm.json',
'mul': 'model.test.mul.json',
'pool2d': 'model.test.pool2d.json',
'relu': 'model.test.relu.json',
'scale': 'model.test.scale.json',
'softmax': 'model.test.softmax.json',
'relu6' : 'model.test.relu6.json'
};
// specify the op to run
const modelType = 'softmax';
const unitData = unitPath[modelType];
let Diff = require('./diff');
let datas;
let otherResult;
let output;
async function run() {
const path = 'test/unitData';
const MODEL_CONFIG = {
dir: `/${path}/`, // directory holding the model
main: unitData, // main model file
};
const paddle = new Paddle({
urlConf: MODEL_CONFIG,
options: {
test: true
}
});
let model = await paddle.load();
datas = model.graph.data;
output = deepCopy(datas);
// build the op data for each test unit
model.graph.weightMap.forEach(op => {
const type = op.type;
if (type !== 'feed' && type !== 'fetch') {
console.log(op.type);
model.graph.buildOpData(op);
}
});
const executor = model.graph.weightMap;
let inst = model.graph.execute_(executor[0]);
let result = model.graph.inst.read();
console.dir(['result', result]);
var one = model.graph.inst.read();
// var other = getResult('conv2d');
console.log('one');
console.log(one);
console.log('other');
}
run();
function deepCopy (data) {
return JSON.parse(JSON.stringify(data));
}
let getTensor = function(id, times = 1) {
let find = 0;
let data = datas.ops.filter((item, idx) => {
if (id === item.type) {
++find;
if (find === times) {
return true;
}
}
});
return getInputs(data[0]);
};
let getInputs = function(data) {
Object.keys(data.inputs).forEach(function(key){
data.inputs[key] = getValue(data.inputs[key][0], datas);
});
Object.keys(data.outputs).forEach(function(key){
let out = getValue(data.outputs[key][0], datas)
data.outputs[key] = out;
otherResult = out[0].data;
});
return data;
};
let getResult = function(id) {
let data = output.ops.filter((item, idx) => {
if (id === item.type) {
return true;
}
});
return getoutputs(data[0]);
};
let getoutputs = function(data) {
let otherResult;
Object.keys(data.outputs).forEach(function(key){
let out = getValue(data.outputs[key][0], output);
otherResult = out[0].data;
});
return otherResult;
};
let getValue = function(name, datas) {
return datas.vars.filter((item, idx) => {
if (name === item.name) {
return item;
}
});
};
let func = function (model) {
// console.log(other);
// var one = inst.read();
// var other = getResult('softmax');
// var color ='';
// var span = null;
// var diff = Diff.diffChars(one.toString(), other.toString()),
// display = document.getElementById('display'),
// fragment = document.createDocumentFragment();
//
// diff.forEach(function(part){
// // green for additions, red for deletions
// // grey for common parts
// color = part.added ? 'green' :
// part.removed ? 'red' : 'grey';
// span = document.createElement('span');
// span.style.color = color;
// span.appendChild(document
// .createTextNode(part.value));
// fragment.appendChild(span);
// });
//
// display.appendChild(fragment);
};
import 'babel-polyfill';
import units from './units/units';
let qs = require('qs');
/**
* @file entry file
* @author wangqun@baidu.com
*
*/
// import the op shader
const FSHADER_CON2D = require('../src/shader/f_elementwise_conv2d3_shader.c');
const shapeA = [1, 3, 256, 256];
const shapeB = [3];
const imgUrl = require('./data/banana.jpeg');
let shapeAData;
let shapeBData;
let inst;
const matrix = units.mockOrigin();
const filter = units.mockFilter();
// original tensor: 1 unit of padding on each side, stride 1
let conf = {
'filter_size_width': 3,
'filter_size_height': 3,
'origin_size_width': matrix.sx,
'origin_size_height': matrix.sx,
'out_size_width': 3,
'out_size_height': 3,
'stride_horizontal': 1,
'stride_vertical': 1,
'pad_left': 1,
'pad_top': 1,
'dilation_horizontal': 2,
'dilation_vertical': 2
};
units.init(conf, FSHADER_CON2D).then(instance => {
if (!instance || typeof instance === 'string') {
throw new Error(instance || 'float textures are not supported');
}
inst = instance;
}).then(() => {
console.dir(['filter', filter]);
console.dir(['origin data', matrix.data]);
// run conv2d
inst.compute(filter, matrix.data, 'conv2d');
}).then(() => {
// read the result
const result = inst.read();
console.dir(['conv2d result', result]);
let input = {
filter: filter,
origin: matrix.data,
};
Object.assign(input, conf);
console.dir(['full input', input]);
inst.getResult('pool2d', input, result);
}).catch(err => {
console.log('-----------error---------' + err);
});
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>paddle web unitTest</title>
<meta name="viewport" content="width=device-width,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no">
<style>
body {
margin: 0;
padding: 0;
}
.paddle-web-wrapper {
position: relative;
width: 100%;
}
.paddle-web-title {
width: 100%;
background-color: blueviolet;
height: 44px;
text-align: center;
color: #fff;
line-height: 44px;
font-size: 18px;
}
.paddle-web-body {
}
#paddle-web-unit-list {
}
#paddle-web-unit-list li {
}
.unit-li-name {
font-size: 16px;
margin: 5px;
font-weight: 700;
letter-spacing: 0;
line-height: 14px;
}
.unit-li-assert {
font-size: 16px;
margin: 5px;
font-weight: 700;
letter-spacing: 0;
line-height: 14px;
}
.pass {
color: #34B458;
}
.no-pass {
color: #b4231b;
}
.unit-li-diff {
margin: 5px;
border: 1px dotted #f71111;
}
p {
word-wrap: break-word;
word-break: normal;
}
span {
word-wrap: break-word;
word-break: normal;
}
#display {
width: 100%;
}
</style>
<!--<script src="unitTest.es6"></script>-->
<!--<script src="testUtils/diff.js"></script>-->
<script src="testUtils/testUtils.es6"></script>
</head>
<body>
<div class="paddle-web-wrapper">
<div class="paddle-web-title">
paddle Web Unit Test
</div>
<div class="paddle-web-body">
<ul id="paddle-web-unit-list">
<li class="unit-li">
<div class="unit-li-name">pool</div>
<div class="unit-li-assert pass">pass</div>
</li>
<li class="unit-li">
<div class="unit-li-name">relu</div>
<div class="unit-li-assert pass">pass</div>
</li>
<li class="unit-li">
<div class="unit-li-name">prelu</div>
<div class="unit-li-assert pass">pass</div>
</li>
<li class="unit-li">
<div class="unit-li-name">softmax</div>
<div class="unit-li-assert pass">pass</div>
</li>
<li class="unit-li">
<div class="unit-li-name">dropout</div>
<div class="unit-li-assert pass">pass</div>
</li>
<li class="unit-li">
<div class="unit-li-name">conv2d</div>
<div class="unit-li-assert pass">pass</div>
</li>
</ul>
<div id="display"></div>
</div>
</div>
</body>
</html>
import Utils from '../../src/utils/utils';
import Gpu from '../../src/gpu/gpu';
import Matrix from '../../src/utils/dims';
import axios from 'axios';
let qs = require('qs');
/**
* @file GPU runtime
* @author wangqun
*
*/
// v_shader.c is the vertex-shader container for computation
const VSHADER = require('../../src/shader/v_shader.c');
export default {
/**
* Initialize the op
* @param {Object} opts runtime options, e.g. el: canvas, dim: 256
* @return {Object} this instance
*/
async init(opts = {}, opShader) {
const gpu = this.gpu = new Gpu(opts);
if (gpu.isFloatingTexture()) {
let texture = gpu.makeTexure(WebGLRenderingContext.FLOAT, null);
let framebuffer = gpu.attachFrameBuffer(texture);
let bufferStatus = gpu.frameBufferIsComplete();
if (bufferStatus.isComplete) {
console.log(bufferStatus.isComplete);
// load shader sources
const vshaderCode = await Utils.loadShader(VSHADER);
let fshaderCode = await Utils.loadShader(opShader);
fshaderCode = Utils.populateData('conv2d', fshaderCode, opts);
gpu.create(vshaderCode, fshaderCode);
return this;
} else {
return bufferStatus.message;
}
} else {
return null;
}
},
/**
* Run the op
* @param bufferA
* @param bufferB
*/
compute(bufferA, bufferB, type) {
this.gpu.render(bufferA, bufferB, type);
},
/**
* Read back and return the op result
*/
read() {
return this.gpu.compute();
},
// generate feed data
feed(pixelData, size) {
return Utils.shapeData(pixelData, size);
},
// mock data for shapeB
mockShapeB(shapeA, shapeB) {
return Utils.mock(shapeA, shapeB);
},
// mock origin 1 * 5 * 5
mockOrigin() {
return new Matrix({
sx: 5,
sy: 5,
depth: 4
});
},
// mock filter 1 * 3 * 3
mockFilter() {
return new Float32Array([1.0, 1.0, 0.0, 0.0, -2.0, 0.0, 1.0, -3.0, 1.0]);
},
// update the op
updateOp(name) {
// this.gpu.updateShader();
},
// get paddle mobile result
getResult(name, input, output) {
if (name) {
let that = this;
axios.defaults.withCredentials = false;
axios.defaults.headers = {
'Content-type': 'application/x-www-form-urlencoded'
};
axios.post('http://yq01-paddle-mobile.epc.baidu.com:8088/uniTest', qs.stringify({
name: name,
input: JSON.stringify(input, function (key, value) {
// both branches of the original condition were identical
return that.formatData(value);
}),
output: JSON.stringify(output, function (key, value) {
return that.formatData(value);
})
},{ indices: false }))
.then(function (response) {
if (response.status === 200) {
that.displayResult(response.data);
}
console.log(response);
})
.catch(function (error) {
console.log(error);
});
}
},
displayResult(res) {
if (res.name) {
let assert = (res.correct === 1 ? 'Pass' : 'Not pass');
let passCls = (res.correct === 1 ? 'pass' : 'no-pass');
if (res.correct === 1) {
let unitHtml = '<li class="unit-li"><div class="unit-li-name">' + res.name + '</div>' +
'<div class="unit-li-assert">' + assert + '</div>' +
'</li>';
let oli = document.createElement('li');
oli.innerHTML = unitHtml;
document.getElementById('paddle-web-unit-list').appendChild(oli);
}
else if (res.correct === 0) {
let serverData = res.server_data;
let unitHtml = '<li class="unit-li"><div class="unit-li-name">' + res.name + '</div>' +
'<div class="unit-li-assert ' + passCls + '">' + assert + '</div>' +
'<div class="unit-li-diff"><p>' + serverData + '</p></div>' +
'</li>';
let oli = document.createElement('li');
oli.innerHTML = unitHtml;
document.getElementById('paddle-web-unit-list').appendChild(oli);
}
}
},
formatData(list) {
if (list.constructor === Float32Array) {
return '[' + list.toString() + ']';
}
else {
return list;
}
},
// release resources
dispose() {
this.gpu.dispose();
}
};