Commit e09feae2
Authored Jan 15, 2020 by openeuler-inno-dev

atune: add feature importance function for tunning

Signed-off-by: openeuler-inno-dev <openeulerbjdev@huawei.com>
Parent: 5753b40b
1 changed file with 20 additions and 0 deletions.

analysis/optimizer/optimizer.py (+20, -0)
analysis/optimizer/optimizer.py

@@ -19,6 +19,8 @@ import logging
 from multiprocessing import Process
 import numpy as np
 from skopt.optimizer import gp_minimize
+from sklearn.linear_model import Lasso
+from sklearn.preprocessing import StandardScaler
 
 LOGGER = logging.getLogger(__name__)
@@ -85,6 +87,17 @@ class Optimizer(Process):
             return keys
         raise ValueError("the dtype of {} is not supported".format(p_nob['name']))
 
+    @staticmethod
+    def feature_importance(options, performance, labels):
+        """feature importance"""
+        options = StandardScaler().fit_transform(options)
+        lasso = Lasso()
+        lasso.fit(options, performance)
+        result = zip(lasso.coef_, labels)
+        result = sorted(result, key=lambda x: -np.abs(x[0]))
+        rank = ", ".join("%s: %s" % (label, round(coef, 3)) for coef, label in result)
+        return rank
+
     def run(self):
         def objective(var):
             for i, knob in enumerate(self.knobs):
@@ -102,6 +115,9 @@ class Optimizer(Process):
             return x_num
 
         params = {}
+        options = []
+        performance = []
+        labels = []
         try:
             LOGGER.info("Running performance evaluation.......")
             ret = gp_minimize(objective, self.build_space(), n_calls=self.max_eval, x0=self.ref)
@@ -116,9 +132,13 @@ class Optimizer(Process):
                 params[knob['name']] = knob['options'][ret.x[i]]
             else:
                 params[knob['name']] = ret.x[i]
+            labels.append(knob['name'])
         self.child_conn.send(params)
         LOGGER.info("Optimized result: %s", params)
         LOGGER.info("The optimized profile has been generated.")
+        rank = self.feature_importance(options, performance, labels)
+        LOGGER.info("The feature importances of current evaluation are: %s", rank)
         return params
 
     def stop_process(self):
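The diff initializes `options`, `performance`, and `labels` in `run()` and appends each knob name to `labels` after the search; the appends that fill `options` and `performance` are not visible in this excerpt, but the overall pattern is that each configuration tried by the optimizer is recorded alongside its measured score. A minimal sketch of that collection pattern, with a hypothetical toy objective standing in for the real benchmark run (the knob names here are made up for illustration):

```python
import random

# Collected across evaluations so the knobs can be ranked afterwards,
# mirroring the lists the commit adds to run().
options = []
performance = []
labels = ["knob_a", "knob_b"]  # hypothetical knob names

def objective(var):
    """Record the sampled configuration and its score.

    A real objective would apply the knob values to the system and run a
    benchmark; here the score is a toy function of the inputs.
    """
    options.append(var)
    score = -var[0] + 0.01 * var[1]
    performance.append(score)
    return score

random.seed(1)
for _ in range(20):
    objective([random.uniform(0, 100), random.uniform(0, 100)])

# options and performance now hold one entry per evaluation, ready to be
# passed to feature_importance together with labels.
```

In the commit itself the evaluations are driven by `gp_minimize` rather than random sampling; only the recording side is sketched here.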
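The new `feature_importance` method standardizes the sampled knob values, fits a Lasso regression of performance on them, and ranks knobs by the absolute magnitude of their coefficients, so knobs whose variation explains little of the performance signal are driven toward zero. A self-contained sketch of that technique on synthetic data, where `knob_a` strongly affects performance and `knob_b` barely does (the data and knob names are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def feature_importance(options, performance, labels):
    """Rank knobs by the absolute size of their Lasso coefficients."""
    options = StandardScaler().fit_transform(options)  # put knobs on one scale
    lasso = Lasso()  # default alpha=1.0; shrinks weak coefficients to zero
    lasso.fit(options, performance)
    result = sorted(zip(lasso.coef_, labels), key=lambda x: -np.abs(x[0]))
    return ", ".join("%s: %s" % (label, round(coef, 3)) for coef, label in result)

# Synthetic evaluations: performance depends strongly on knob_a, weakly on knob_b.
rng = np.random.default_rng(0)
options = rng.uniform(0, 10, size=(50, 2))
performance = 5.0 * options[:, 0] + 0.1 * options[:, 1]
rank = feature_importance(options, performance, ["knob_a", "knob_b"])
print(rank)  # knob_a is listed first with the larger coefficient
```

Standardizing first matters: without it, the coefficient magnitudes would reflect each knob's raw units rather than its relative influence on performance.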