doodoocoder / prometheus
Commit 19c5eb61
Authored March 15, 2016 by Fabian Reinartz

Merge pull request #1486 from prometheus/instrument-scrape-pool-sync

Instrument scrape pool `sync()`

Parents: 813f61e5, dbe5d18b
Changes: 1 changed file with 25 additions and 0 deletions.

retrieval/scrape.go (+25, -0)
```diff
@@ -43,6 +43,7 @@ const (
 	// Constants for instrumentation.
 	namespace = "prometheus"
 	interval  = "interval"
+	scrapeJob = "scrape_job"
 )

 var (
```
```diff
@@ -74,12 +75,31 @@ var (
 		},
 		[]string{interval},
 	)
+	targetSyncIntervalLength = prometheus.NewSummaryVec(
+		prometheus.SummaryOpts{
+			Namespace:  namespace,
+			Name:       "target_sync_length_seconds",
+			Help:       "Actual interval to sync the scrape pool.",
+			Objectives: map[float64]float64{0.01: 0.001, 0.05: 0.005, 0.5: 0.05, 0.90: 0.01, 0.99: 0.001},
+		},
+		[]string{scrapeJob},
+	)
+	targetScrapePoolSyncsCounter = prometheus.NewCounterVec(
+		prometheus.CounterOpts{
+			Namespace: namespace,
+			Name:      "target_scrape_pool_sync_total",
+			Help:      "Total number of syncs that were executed on a scrape pool.",
+		},
+		[]string{scrapeJob},
+	)
 )

 func init() {
 	prometheus.MustRegister(targetIntervalLength)
 	prometheus.MustRegister(targetSkippedScrapes)
 	prometheus.MustRegister(targetReloadIntervalLength)
+	prometheus.MustRegister(targetSyncIntervalLength)
+	prometheus.MustRegister(targetScrapePoolSyncsCounter)
 }

 // scrapePool manages scrapes for sets of targets.
```
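Both new metrics are vectors labeled by scrape job: calling `WithLabelValues(sp.config.JobName)` later selects the child series for one job, so every job accumulates its own counts. As a rough illustration of that partitioning (this `counterVec` type is a hypothetical stdlib-only stand-in, not the real client_golang API):

```go
package main

import "fmt"

// counterVec is a toy stand-in for a labeled counter metric: one float64
// count per label value, mimicking how a CounterVec keeps one child
// series per scrape job.
type counterVec struct {
	counts map[string]float64
}

func newCounterVec() *counterVec {
	return &counterVec{counts: map[string]float64{}}
}

// WithLabelValues returns an increment function bound to a single label
// value, analogous to selecting a child counter and calling Inc() on it.
func (v *counterVec) WithLabelValues(job string) func() {
	return func() { v.counts[job]++ }
}

func main() {
	syncs := newCounterVec()
	syncs.WithLabelValues("node")()     // two syncs for job "node"
	syncs.WithLabelValues("node")()
	syncs.WithLabelValues("cadvisor")() // one sync for job "cadvisor"
	fmt.Println(syncs.counts["node"], syncs.counts["cadvisor"])
}
```

In the real library, registering the vector once in `init()` is enough; child series are created lazily on first `WithLabelValues` call.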
```diff
@@ -188,6 +208,7 @@ func (sp *scrapePool) reload(cfg *config.ScrapeConfig) {
 // scrape loops for new targets, and stops scrape loops for disappeared targets.
 // It returns after all stopped scrape loops terminated.
 func (sp *scrapePool) sync(targets []*Target) {
+	start := time.Now()
 	sp.mtx.Lock()
 	defer sp.mtx.Unlock()
```
```diff
@@ -233,6 +254,10 @@ func (sp *scrapePool) sync(targets []*Target) {
 	// may be active and tries to insert. The old scraper that didn't terminate yet could still
 	// be inserting a previous sample set.
 	wg.Wait()
+	targetSyncIntervalLength.WithLabelValues(sp.config.JobName).Observe(
+		float64(time.Since(start)) / float64(time.Second),
+	)
+	targetScrapePoolSyncsCounter.WithLabelValues(sp.config.JobName).Inc()
 }

 // sampleAppender returns an appender for ingested samples from the target.
```