<div align=center><img src="./picture/01.svg"/></div>

# Edge-Engine

## Edge: An Open-Source Scientific Computing Engine

[README for English_version](./README_EN.md)

[![license](https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000) ](https://github.com/AllenZYJ/Edge-Computing-Engine/blob/add-license-1/LICENSE)![](https://img.shields.io/badge/Bulid-Version1.0-green.svg)

> Project start date: 2019/10/01

> Current codebase: 810 lines
>
> Tests: main.cpp | nerual_network.cpp | added a fully connected neural network architecture (with a demo of forward and backward propagation)
>
> Test environment:
>
> MacBook Pro
>
> Compiler:
>
> Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/usr/include/c++/4.2.1
> Apple LLVM version 10.0.1 (clang-1001.0.46.4)
> Target: x86_64-apple-darwin18.7.0
> Thread model: posix

What is this?

![path](./picture/path.png)

## Installation and Compilation

git clone git@github.com:AllenZYJ/Edge-Computing-Engine.git

cd into the cloned directory.

Enter the root directory:

Run:

```
make
make install
```

Compile the demo entry program:

```shell
➜  edge-computing-engine git:(master) ✗ g++ main.cpp -o ma -lautodiff
```

Or compile the BP test program:

```shell
➜  edge-computing-engine git:(master) ✗ g++ nerual_network.cpp -o ma
```

Run:

```shell
➜  edge-computing-engine git:(master) ✗ ./ma
```

## New demo: a five-layer fully connected network with customizable neurons, activation functions, and loss functions

Using the fully connected layers:

Define the first layer's weights yourself, then call the forward function to propagate through one layer; the activated values are computed automatically, and the activation function is customizable.

First define a weight matrix and a bias matrix; the first weight matrix's size is set by the data's column count:

```cpp
Matrix bias1 = CreateRandMat(2,1);
Matrix weight1 = CreateRandMat(2,data.col);
```

You can then compute the first layer's forward output and size the next layer from it: the next bias's row count comes from the first layer's weight-matrix rows, the second layer's weight matrix takes its row count from the first layer's output rows, and its column count is free to choose. This reflects the dimension compatibility of forward propagation. That is:

```cpp
Matrix output1 = sequaltial.forward(get_T(get_row(data_mine,index)),weight1,bias1);
```

```cpp
Matrix weight2 = CreateRandMat(output1.row,2);
Matrix bias2 = CreateRandMat(weight2.row,1);
Matrix output2 = sequaltial.forward(output1,weight2,bias2);
```

The second layer's output follows in the same way, and so on.

The final output code is in nerual_test.cpp: ![nerual_test1](./picture/nerual_test1.png)

Code:

```cpp
Matrix data_mine = CreateRandMat(2,1);
Matrix label = CreateMatrix(2,1);
Matrix weight1 = CreateRandMat(2,2);
Matrix weight2 = CreateRandMat(2,2);
Matrix weight3 = CreateRandMat(2,2);
Matrix weight4 = CreateRandMat(2,2);
for(int epoch = 0; epoch < 20; epoch++)
{
	cout_mat(weight1);
	edge_network sequaltial(2,2);
	// forward pass through four dense layers
	Matrix output1 = sequaltial.forward(data_mine,weight1);
	Matrix output2 = sequaltial.forward(output1,weight2);
	Matrix output3 = sequaltial.forward(output2,weight3);
	Matrix output4 = sequaltial.forward(output3,weight4);
	Matrix output_end = sequaltial.end_layer_backward(label,output4);
	// backward pass: propagate the error and accumulate gradients
	Matrix backward1 = sequaltial.backward(output_end,output3,weight4);
	Matrix grad_w1w2 = mul_simple(backward1,data_mine);
	Matrix backward2 = sequaltial.backward(backward1,output2,weight3);
	Matrix grad_w3w4 = mul_simple(backward2,data_mine);
	Matrix backward3 = sequaltial.backward(backward2,output1,weight2);
	Matrix grad_w5w6 = mul_simple(backward3,data_mine);
	Matrix backward4 = sequaltial.backward(backward3,output4,weight1);
	Matrix grad_w7w8 = mul_simple(backward4,data_mine);
	// gradient descent step with learning rate 0.0001
	weight1 = subtract(weight1,times_mat(0.0001,padding(grad_w1w2,2,2)));
	weight2 = subtract(weight2,times_mat(0.0001,padding(grad_w3w4,2,2)));
	weight3 = subtract(weight3,times_mat(0.0001,padding(grad_w5w6,2,2)));
	weight4 = subtract(weight4,times_mat(0.0001,padding(grad_w7w8,2,2)));
}
```
```shell
---------epoch: 0------------
loss: 4.65667
loss: 3.28273
---------epoch: 1------------
loss: 4.65655
loss: 3.28265
---------epoch: 2------------
loss: 4.65643
loss: 3.28257
---------epoch: 3------------
loss: 4.65631
loss: 3.28249
---------epoch: 4------------
loss: 4.65619
loss: 3.2824
---------epoch: 5------------
loss: 4.65607
loss: 3.28232
---------epoch: 6------------
loss: 4.65596
loss: 3.28224
---------epoch: 7------------
loss: 4.65584
loss: 3.28216
---------epoch: 8------------
loss: 4.65572
loss: 3.28208
---------epoch: 9------------
loss: 4.6556
loss: 3.282
---------epoch: 10------------
loss: 4.65548
loss: 3.28192
---------epoch: 11------------
loss: 4.65536
loss: 3.28184
---------epoch: 12------------
loss: 4.65524
loss: 3.28176
---------epoch: 13------------
loss: 4.65512
loss: 3.28168
---------epoch: 14------------
loss: 4.65501
loss: 3.2816
---------epoch: 15------------
loss: 4.65489
loss: 3.28152
---------epoch: 16------------
loss: 4.65477
loss: 3.28144
---------epoch: 17------------
loss: 4.65465
loss: 3.28136
---------epoch: 18------------
loss: 4.65453
loss: 3.28128
---------epoch: 19------------
loss: 4.65441
loss: 3.2812
```

## BP backpropagation demo, modeled on the official PyTorch example

Iteration results:

W1: 0.6944 1.52368
	-1.46644 -0.154097
W2: 1.10079
	0.462984
loss: 0.559269

epoch: 100; you can run it yourself.

Final loss and parameter iteration output:

-----------split-line-----------
2.79955
0.36431
-0.451694
epoch: 100 error: 6.05895
-----------split-line-----------
0.009167(sum of loss)

### API

- [x] Matrix read_csv(string &file_path): read a formatted (CSV) file and return a matrix whose dimensions are computed automatically.

- [x] Formatted file writing interface, comparable to pandas.to_csv.

- [x] Matrix broadcasting, exposed through the padding interface.

- [x] Fully connected layer forward and backward propagation, with automatic differentiation.

- [x] Matrix differentiation and automatic differentiation wrappers.

- [x] int save_txt(Matrix mid1,string path = "./",string delimiter = ",",string header="./"): write a matrix to a formatted file via a file stream, with a customizable header, output path, and delimiter (default ",").

- [x] Create a matrix: create(row,cols) allocates a matrix structure with all elements initialized to 0.

- [x] Change an element: void move_ele(int &ele1, int &ele2) modifies the value at a given position.

- [x] Matrix1+Matrix2: Matrix add(Matrix mid1,Matrix mid2,int flag=1), matrix addition with optional bitwise acceleration.

- [x] flag selects how the elements are combined; the default 1 uses bitwise-operation acceleration.

- [x] Matrix1-Matrix2 : Matrix subtract(Matrix mid1,Matrix mid2)

- [x] Matrix1*Matrix2 : Matrix mul(Matrix mid1,Matrix mid2)

- [x] Matrix1*n : Matrix times_mat(int times,Matrix mid1)

- [x] Transposition: Matrix get_T(Matrix mid1).

- [x] Mul(matrix1,matrix2): matrix product in the full mathematical sense.

- [x] double* flatten(Matrix mid1): return a flattened array.

- [x] Matrix matrix_rs(Matrix mid1,int rs_row,int rs_col): structural compression (reshape) of a matrix.

- [x] double matrix_sum(Matrix mid1): sum of all elements.

- [x] double matrix_mean(Matrix mid1): mean of all elements.

- [x] Matrix appply(Matrix mid1,Matrix mid2,int axis = 0): matrix concatenation.

- [x] Matrix iloc(Matrix mid1,int start_x=0,int end_x=0,int start_y=0,int end_y=0): matrix slicing.

- [x] Matrix mul_simple(Matrix mid1,Matrix mid2): element-wise multiplication for machine-learning use; do not confuse it with mathematical matrix multiplication.

- [x] ReLU activation interface.

- [x] Mean squared error (MSE) interface.

- [x] Random weight matrix creation interface.

### Upcoming development:

- [ ] Convolutional neural network definitions (including, but not limited to, convolution kernels, pooling layers, and custom loss interfaces).

- [ ] Random forest algorithm.

- [ ] Mainstream network architectures.

  

## Backpropagation test demo

```cpp
#include<iostream>
#include<ctime>
#include<string>
#include<time.h>
#include<math.h>
#include<fstream>
#include<stdlib.h>
#include"./matrix/matrix_def.h"
#include"./matrix/matrix_pro.h"
#include"./welcome/score_wel.cpp"
#include"./logistic/logistic_def.h"
#include"./file_pro/data_read.h"
using namespace std;
clock_t start, stop;
double duration;
int main()
{
	welcome();	
	string path = "./data/nerual_data.csv";
	Matrix data = read_csv(path);
	Matrix bais = CreateMatrix(data.row,1);		
	Matrix x = iloc(data,0,100,0,2);
	Matrix y = iloc(data,0,100,2,3);
	int N=100,in_Dim=2,H_num=2,out_Dim=2;
	double learning_rate = 0.0001;
	Matrix W1 = CreateRandMat(in_Dim,H_num);
	Matrix W2 = CreateRandMat(H_num,out_Dim);
	cout_mat(W1);
	cout_mat(W2);
	for(int epoch = 0;epoch<100;epoch++)
	{
		Matrix x_w1 = mul(x,W1);
		Matrix re = mat_relu(x_w1);
		Matrix out = mul(re,W2);
		Matrix mat_sq = mat_sq_loss(out,y);
		Matrix grad_y_pred = times_mat(2.0,subtract(out,y));
		Matrix grad_w2 = mul(get_T(re),grad_y_pred);
		Matrix grad_h_relu = mul(grad_y_pred,get_T(W2));
		Matrix grad_h_relu_copy = mat_relu(grad_h_relu);
		Matrix grad_w1 = mul(get_T(x),grad_h_relu_copy);
		Matrix dw1 = times_mat(learning_rate,mul(get_T(x),grad_h_relu_copy));
		W1 = subtract(W1,dw1);
		W2 = subtract(W2,times_mat(learning_rate,grad_w2));
		cout<<"W1: ";
		cout_mat(W1);
		cout<<"W2: ";
		cout_mat(W2);
		cout<<"loss"<<": ";
		cout<<matrix_sum(mat_sq)/100<<endl;
	}
}
```



## Demo: matrix multiplication

Matrix **A**:

| Col 1   | Col 2   | Col 3   | Col 4   | Col 5   |
| ------- | ------- | ------- | ------- | ------- |
| 72.0000 | 0.0000  | 0.0000  | 0.0000  | 0.0000  |
| 0.0000  | 64.0000 | 0.0000  | 0.0000  | 0.0000  |
| 16.0000 | 8.0000  | 0.0000  | 0.0000  | 0.0000  |
| 0.0000  | 0.0000  | 56.0000 | 16.0000 | 32.0000 |
| 0.0000  | 0.0000  | 0.0000  | 0.0000  | 0.0000  |
| 0.0000  | 0.0000  | 0.0000  | 0.0000  | 0.0000  |

Matrix **B**:

| Col 1   | Col 2   | Col 3   | Col 4   | Col 5  | Col 6  |
| ------- | ------- | ------- | ------- | ------ | ------ |
| 72.0000 | 0.0000  | 16.0000 | 0.0000  | 0.0000 | 0.0000 |
| 0.0000  | 64.0000 | 8.0000  | 0.0000  | 0.0000 | 0.0000 |
| 0.0000  | 0.0000  | 0.0000  | 56.0000 | 0.0000 | 0.0000 |
| 0.0000  | 0.0000  | 0.0000  | 16.0000 | 0.0000 | 0.0000 |
| 0.0000  | 0.0000  | 0.0000  | 32.0000 | 0.0000 | 0.0000 |

Result **A × B**:

| Col 1     | Col 2     | Col 3     | Col 4     | Col 5  | Col 6  |
| --------- | --------- | --------- | --------- | ------ | ------ |
| 5184.0000 | 0.0000    | 1152.0000 | 0.0000    | 0.0000 | 0.0000 |
| 0.0000    | 4096.0000 | 512.0000  | 0.0000    | 0.0000 | 0.0000 |
| 1152.0000 | 512.0000  | 320.0000  | 0.0000    | 0.0000 | 0.0000 |
| 0.0000    | 0.0000    | 0.0000    | 4416.0000 | 0.0000 | 0.0000 |
| 0.0000    | 0.0000    | 0.0000    | 0.0000    | 0.0000 | 0.0000 |
| 0.0000    | 0.0000    | 0.0000    | 0.0000    | 0.0000 | 0.0000 |

## Demo: matrix flatten

double* flatten(Matrix mid1)

|  1   |  2   |  3   |
| :--: | :--: | :--: |
|  2   |  4   |  6   |
|  7   |  8   |  9   |

flattens to:

| 1    | 2    | 3    | 2    | 4    | 6    | 7    | 8    | 9    |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |

(like numpy.flatten)

## Demo: matrix concatenation parameters (appply)

Matrix appply(Matrix mid1,Matrix mid2,int axis = 0)

> With axis = 0:

| 0    | 7    | 2    |
| ---- | ---- | ---- |
| 0    | 3    | 1    |
| 0    | 0    | 0    |
| 0    | 0    | 11   |
| 0    | 7    | 2    |
| 0    | 3    | 1    |
| 0    | 0    | 0    |
| 0    | 0    | 11   |
------

> With axis = 1:

| 0    | 7    | 2    | 0    | 7    | 2    |
| ---- | ---- | ---- | ---- | ---- | ---- |
| 0    | 3    | 1    | 0    | 3    | 1    |
| 0    | 0    | 0    | 0    | 0    | 0    |
| 0    | 0    | 11   | 0    | 0    | 11   |

------

## Update 2019/11/18 00:12

- [x] read_csv reads a comma-separated file through a file stream and returns a matrix whose dimensions are computed automatically.

  For example, given a CSV head:

  | -0.017612 | 14.053064 | 0    |
  | --------- | --------- | ---- |
  | -1.395634 | 4.662541  | 1    |
  | -0.752157 | 6.53862   | 0    |
  | -1.322371 | 7.152853  | 0    |
  | 0.423363  | 11.054677 | 0    |
  | 0.406704  | 7.067335  | 1    |

  Get:

  ![](./picture/WX20191119-105411@2x.png)

  

## Logistic Regression demo based on Edge


```cpp
#include<iostream>
#include<ctime>
#include<string>
#include <time.h>
#include <math.h>
#include <fstream>
#include"./matrix/matrix_def.h"
#include"./matrix/matrix_pro.h"
#include"./welcome/score_wel.cpp"
#include"./logistic/logistic_def.h"
#include"./file_pro/data_read.h"
using namespace std;
clock_t start, stop;
double duration;
int main()
{
	start = clock(); // start timing; `stop - start` is printed below
	welcome();	
	string path = "./new_data2.csv";
	Matrix data = read_csv(path);
	Matrix bais = CreateMatrix(data.row,1);		
	data = appply(data,bais,1);
	Matrix y = iloc(data,0,0,3,4);
	Matrix x_1 = iloc(data,0,0,0,3);
	Matrix x_2 = get_T(x_1);
	double alpha = 0.002;
	int max_epoch = 100;
	Matrix weight = CreateMatrix(3,1);
	change_va(weight,0,0,1);
	change_va(weight,1,0,1);
	change_va(weight,2,0,1);
	int epoch = 0;
	for(epoch = 0;epoch<=max_epoch;epoch++)
	{
	cout<<"-----------split-line-----------"<<endl;			
	Matrix temp_mul = mul(x_1,weight);
	Matrix h =e_sigmoid(temp_mul);
	Matrix error = subtract(y,h);
	Matrix temp_update = mul(x_2,error);
	weight = add(weight,times_mat(alpha,temp_update),0); // apply the gradient update to the weights
	cout_mat(weight);
	cout<<"epoch: "<<epoch<<" error: "<<matrix_sum(error)<<endl;
	cout<<"-----------split-line-----------"<<endl;	
	}
	stop = clock();
  printf("%f\n", (double)(stop - start) / CLOCKS_PER_SEC);
	return 0;
}
```
Notes:

> 1. Matrix elements default to 1.
> 2. Bitwise acceleration avoids padding overly large values, but it loses some precision; use it with care.
> 3. Remember to delete(matrix) once you are done with a matrix computation unit.
> 4. The API follows pandas and numpy conventions where possible.
> 5. See the latest code for more details.
> 6. Stars and follows are welcome.
> 7. Thanks to the blogger Omar for the idea behind the autodiff part.

------



<div align = center><img src = './picture/星月.svg'></div>



Personal site: [极度空间](http://likedge.top/)

Author email: zk@likedge.top | edge@ibooker.org.cn

sahduashufa's avatar
sahduashufa 已提交
508
 QQ:2533524298