whqwjb / go-ethereum
Commit 8906b2fe
Authored on May 17, 2016 by Péter Szilágyi

    eth/downloader: fix reviewer comments

Parent: e86619e7

Showing 2 changed files with 32 additions and 9 deletions:

    eth/downloader/downloader.go  +30  -7
    eth/downloader/queue.go        +2  -2
eth/downloader/downloader.go

@@ -65,7 +65,7 @@ var (
 	maxQueuedHashes   = 32 * 1024 // [eth/61] Maximum number of hashes to queue for import (DOS protection)
 	maxQueuedHeaders  = 32 * 1024 // [eth/62] Maximum number of headers to queue for import (DOS protection)
 	maxHeadersProcess = 2048      // Number of header download results to import at once into the chain
-	maxResultsProcess = 4096      // Number of content download results to import at once into the chain
+	maxResultsProcess = 2048      // Number of content download results to import at once into the chain

 	fsHeaderCheckFrequency = 100  // Verification frequency of the downloaded headers during fast sync
 	fsHeaderSafetyNet      = 2048 // Number of headers to discard in case a chain violation is detected
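The hunk above halves `maxResultsProcess` so content results are imported in the same batch size as headers (`maxHeadersProcess = 2048`). As a rough illustration of what such a bound does, here is a minimal, hypothetical sketch (the `chunkSizes` helper is not geth code; only the constant's value comes from the diff):

```go
package main

import "fmt"

// maxResultsProcess mirrors the new constant from the diff above; the
// batching helper itself is an illustrative sketch, not the downloader's code.
const maxResultsProcess = 2048

// chunkSizes splits n pending download results into import batches of at most
// maxResultsProcess items, the way a bounded importer would drain its queue.
func chunkSizes(n int) []int {
	var batches []int
	for n > 0 {
		size := n
		if size > maxResultsProcess {
			size = maxResultsProcess
		}
		batches = append(batches, size)
		n -= size
	}
	return batches
}

func main() {
	fmt.Println(chunkSizes(5000)) // [2048 2048 904]
}
```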
@@ -716,9 +716,9 @@ func (d *Downloader) fetchHashes61(p *peer, td *big.Int, from uint64) error {
 	getHashes := func(from uint64) {
 		glog.V(logger.Detail).Infof("%v: fetching %d hashes from #%d", p, MaxHashFetch, from)

-		go p.getAbsHashes(from, MaxHashFetch)
 		request = time.Now()
 		timeout.Reset(hashTTL)
+		go p.getAbsHashes(from, MaxHashFetch)
 	}
 	// Start pulling hashes, until all are exhausted
 	getHashes(from)
@@ -1168,7 +1168,7 @@ func (d *Downloader) findAncestor(p *peer, height uint64) (uint64, error) {
// facilitate concurrency but still protect against malicious nodes sending bad
// headers, we construct a header chain skeleton using the "origin" peer we are
// syncing with, and fill in the missing headers using anyone else. Headers from
// other peers are only accepted if they map cleanly to the skeleton. If noone
// other peers are only accepted if they map cleanly to the skeleton. If no
one
// can fill in the skeleton - not even the origin peer - it's assumed invalid and
// the origin is dropped.
func
(
d
*
Downloader
)
fetchHeaders
(
p
*
peer
,
from
uint64
)
error
{
...
...
@@ -1183,6 +1183,9 @@ func (d *Downloader) fetchHeaders(p *peer, from uint64) error {
 	defer timeout.Stop()

 	getHeaders := func(from uint64) {
+		request = time.Now()
+		timeout.Reset(headerTTL)
+
 		if skeleton {
 			glog.V(logger.Detail).Infof("%v: fetching %d skeleton headers from #%d", p, MaxHeaderFetch, from)
 			go p.getAbsHeaders(from+uint64(MaxHeaderFetch)-1, MaxSkeletonSize, MaxHeaderFetch-1, false)
@@ -1190,8 +1193,6 @@ func (d *Downloader) fetchHeaders(p *peer, from uint64) error {
 			glog.V(logger.Detail).Infof("%v: fetching %d full headers from #%d", p, MaxHeaderFetch, from)
 			go p.getAbsHeaders(from, MaxHeaderFetch, 0, false)
 		}
-		request = time.Now()
-		timeout.Reset(headerTTL)
 	}
 	// Start pulling the header chain skeleton until all is done
 	getHeaders(from)
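These two hunks apply the same timer-before-request fix to `fetchHeaders`. The skeleton request itself uses slightly intricate arithmetic: `getAbsHeaders(from+uint64(MaxHeaderFetch)-1, MaxSkeletonSize, MaxHeaderFetch-1, false)` asks for every `MaxHeaderFetch`-th header starting at `from+MaxHeaderFetch-1`, so each gap between skeleton points is exactly one full-fetch batch. A small sketch of that index computation (the helper is hypothetical; the constant values of 192 and 128 are assumed from eth/downloader of that era):

```go
package main

import "fmt"

// Assumed values for eth/downloader's constants; the index helper below is an
// illustration of the getAbsHeaders arithmetic, not code from the repository.
const (
	MaxHeaderFetch  = 192 // headers retrievable per full fetch request
	MaxSkeletonSize = 128 // skeleton points requestable at once
)

// skeletonIndexes returns the block numbers a skeleton request covers: the
// first point is from+MaxHeaderFetch-1, and MaxHeaderFetch-1 headers are
// skipped between consecutive points.
func skeletonIndexes(from uint64, count int) []uint64 {
	idxs := make([]uint64, count)
	for i := range idxs {
		idxs[i] = from + MaxHeaderFetch - 1 + uint64(i)*MaxHeaderFetch
	}
	return idxs
}

func main() {
	fmt.Println(skeletonIndexes(1, 3)) // [192 384 576]
}
```

Filling the gap below each skeleton point with one `MaxHeaderFetch`-sized fetch then reconstructs a contiguous header chain.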
@@ -1413,6 +1414,28 @@ func (d *Downloader) fetchNodeData() error {
 // fetchParts iteratively downloads scheduled block parts, taking any available
 // peers, reserving a chunk of fetch requests for each, waiting for delivery and
 // also periodically checking for timeouts.
+//
+// As the scheduling/timeout logic mostly is the same for all downloaded data
+// types, this method is used by each for data gathering and is instrumented with
+// various callbacks to handle the slight differences between processing them.
+//
+// The instrumentation parameters:
+//  - errCancel:  error type to return if the fetch operation is cancelled (mostly makes logging nicer)
+//  - deliveryCh: channel from which to retrieve downloaded data packets (merged from all concurrent peers)
+//  - deliver:    processing callback to deliver data packets into type specific download queues (usually within `queue`)
+//  - wakeCh:     notification channel for waking the fetcher when new tasks are available (or sync completed)
+//  - expire:     task callback method to abort requests that took too long and return the faulty peers (traffic shaping)
+//  - pending:    task callback for the number of requests still needing download (detect completion/non-completability)
+//  - inFlight:   task callback for the number of in-progress requests (wait for all active downloads to finish)
+//  - throttle:   task callback to check if the processing queue is full and activate throttling (bound memory use)
+//  - reserve:    task callback to reserve new download tasks to a particular peer (also signals partial completions)
+//  - fetchHook:  tester callback to notify of new tasks being initiated (allows testing the scheduling logic)
+//  - fetch:      network callback to actually send a particular download request to a physical remote peer
+//  - cancel:     task callback to abort an in-flight download request and allow rescheduling it (in case of lost peer)
+//  - capacity:   network callback to retrieve the estimated type-specific bandwidth capacity of a peer (traffic shaping)
+//  - idle:       network callback to retrieve the currently (type specific) idle peers that can be assigned tasks
+//  - setIdle:    network callback to set a peer back to idle and update its estimated capacity (traffic shaping)
+//  - kind:       textual label of the type being downloaded to display in log messages
 func (d *Downloader) fetchParts(errCancel error, deliveryCh chan dataPack, deliver func(dataPack) (int, error), wakeCh chan bool,
 	expire func() map[string]int, pending func() int, inFlight func() bool, throttle func() bool, reserve func(*peer, int) (*fetchRequest, bool, error),
 	fetchHook func([]*types.Header), fetch func(*peer, *fetchRequest) error, cancel func(*fetchRequest), capacity func(*peer) int,
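The new doc comment explains the design: one generic fetch loop specialized entirely through callbacks. A drastically simplified sketch of that shape, with all names and the loop body invented for illustration (only `pending`, `throttle`, and `fetch` correspond to parameters named in the comment):

```go
package main

import "fmt"

// fetchLoop is a toy version of a callback-instrumented download loop: the
// scheduling skeleton is generic, and the data-type-specific behaviour is
// injected via function parameters, as the fetchParts comment describes.
func fetchLoop(pending func() int, throttle func() bool, fetch func(n int) int) int {
	fetched := 0
	for pending() > 0 {
		if throttle() {
			break // processing queue full: stop scheduling new requests
		}
		fetched += fetch(pending()) // hand the remaining task count to the fetcher
	}
	return fetched
}

func main() {
	remaining := 10
	total := fetchLoop(
		func() int { return remaining },              // pending: tasks left to download
		func() bool { return false },                 // throttle: never, in this demo
		func(n int) int { remaining -= 2; return 2 }, // fetch: retrieves 2 items per round
	)
	fmt.Println(total) // 10
}
```

The real method adds peer reservation, delivery channels, and timeout expiry around the same skeleton; the point is that hash, header, body, receipt, and state downloads all share one loop.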
@@ -1581,10 +1604,10 @@ func (d *Downloader) processHeaders(origin uint64, td *big.Int) error {
 			for i, header := range rollback {
 				hashes[i] = header.Hash()
 			}
-			lh, lfb, lb := d.headHeader().Number, d.headFastBlock().Number(), d.headBlock().Number()
+			lastHeader, lastFastBlock, lastBlock := d.headHeader().Number, d.headFastBlock().Number(), d.headBlock().Number()
 			d.rollback(hashes)
 			glog.V(logger.Warn).Infof("Rolled back %d headers (LH: %d->%d, FB: %d->%d, LB: %d->%d)",
-				len(hashes), lh, d.headHeader().Number, lfb, d.headFastBlock().Number(), lb, d.headBlock().Number())
+				len(hashes), lastHeader, d.headHeader().Number, lastFastBlock, d.headFastBlock().Number(), lastBlock, d.headBlock().Number())

 			// If we're already past the pivot point, this could be an attack, disable fast sync
 			if rollback[len(rollback)-1].Number.Uint64() > pivot {
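Beyond renaming `lh`/`lfb`/`lb` to descriptive identifiers, this hunk shows a snapshot-before-mutate pattern: the head numbers are captured before `d.rollback` so the log line can report old and new values. A tiny hypothetical sketch of the same pattern (the `chain` type and helpers are invented; only the pattern mirrors the diff):

```go
package main

import "fmt"

// chain is a stand-in for the downloader's chain view, holding one head number.
type chain struct{ head uint64 }

func (c *chain) rollback(to uint64) { c.head = to }

// snapshotAndLog captures the head under a descriptive name before mutating
// it, so the log message can show the old->new transition.
func snapshotAndLog(c *chain, to uint64) string {
	lastHeader := c.head // snapshot before the rollback changes it
	c.rollback(to)
	return fmt.Sprintf("Rolled back headers (LH: %d->%d)", lastHeader, c.head)
}

func main() {
	c := &chain{head: 1604}
	fmt.Println(snapshotAndLog(c, 1581)) // Rolled back headers (LH: 1604->1581)
}
```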
eth/downloader/queue.go

@@ -39,8 +39,8 @@ import (
 )

 var (
-	blockCacheLimit   = 16384 // Maximum number of blocks to cache before throttling the download
-	maxInFlightStates = 8192  // Maximum number of state downloads to allow concurrently
+	blockCacheLimit   = 8192 // Maximum number of blocks to cache before throttling the download
+	maxInFlightStates = 8192 // Maximum number of state downloads to allow concurrently
 )

 var (
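`blockCacheLimit` caps how many downloaded blocks may sit in the queue awaiting import; once the cache is full, the `throttle` callback described earlier pauses further fetching. A minimal sketch of such a check, assuming only the constant's value from the diff (the helper is illustrative, not the queue's actual method):

```go
package main

import "fmt"

// blockCacheLimit mirrors the halved value from the diff above.
const blockCacheLimit = 8192

// shouldThrottle reports whether downloading should pause because the cache
// already holds blockCacheLimit results awaiting import (bounding memory use).
func shouldThrottle(cached int) bool {
	return cached >= blockCacheLimit
}

func main() {
	fmt.Println(shouldThrottle(8191), shouldThrottle(8192)) // false true
}
```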