Commit c9332860

Authored Oct 10, 2018 by Todd Fleming

    Merge remote-tracking branch 'origin/develop' into state-history-plugin

Parents: fd022005, 13d0a6c2

Showing 15 changed files with 175 additions and 172 deletions:

  .buildkite/long_running_tests.yml              +6   -24
  .buildkite/pipeline.yml                        +0   -36
  libraries/chain/abi_serializer.cpp             +1   -1
  plugins/COMMUNITY.md                           +1   -0
  programs/eosio-blocklog/main.cpp               +25  -13
  tests/CMakeLists.txt                           +0   -2
  tests/Cluster.py                               +64  -21
  tests/Node.py                                  +3   -53
  tests/TestHelper.py                            +1   -0
  tests/distributed-transactions-remote-test.py  +3   -3
  tests/distributed-transactions-test.py         +7   -3
  tests/nodeos_run_remote_test.py                +4   -3
  tests/nodeos_run_test.py                       +11  -8
  tests/nodeos_under_min_avail_ram.py            +9   -0
  tests/testUtils.py                             +40  -5
.buildkite/long_running_tests.yml

@@ -97,9 +97,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        ln -s "$(pwd)" /data/job && cd /data/job/build && ctest -L long_running_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":darwin: Tests"
     agents:
       - "role=macos-tester"
@@ -107,7 +104,7 @@ steps:
       - "mongod.log"
       - "build/genesis.json"
       - "build/config.ini"
-    timeout: 60
+    timeout: 100
   - command: |
        echo "--- :arrow_down: Downloading build directory" && \
@@ -117,9 +114,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L long_running_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":ubuntu: Tests"
     agents:
       - "role=linux-tester"
@@ -131,7 +125,7 @@ steps:
       docker#v1.4.0:
         image: "eosio/ci:ubuntu"
         workdir: /data/job
-    timeout: 60
+    timeout: 100
   - command: |
        echo "--- :arrow_down: Downloading build directory" && \
@@ -141,9 +135,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L long_running_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":ubuntu: 18.04 Tests"
     agents:
       - "role=linux-tester"
@@ -155,7 +146,7 @@ steps:
       docker#v1.4.0:
         image: "eosio/ci:ubuntu18"
         workdir: /data/job
-    timeout: 60
+    timeout: 100
   - command: |
        echo "--- :arrow_down: Downloading build directory" && \
@@ -165,9 +156,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L long_running_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":fedora: Tests"
     agents:
       - "role=linux-tester"
@@ -179,7 +167,7 @@ steps:
       docker#v1.4.0:
         image: "eosio/ci:fedora"
         workdir: /data/job
-    timeout: 60
+    timeout: 100
   - command: |
        echo "--- :arrow_down: Downloading build directory" && \
@@ -189,9 +177,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L long_running_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":centos: Tests"
     agents:
       - "role=linux-tester"
@@ -203,7 +188,7 @@ steps:
       docker#v1.4.0:
         image: "eosio/ci:centos"
         workdir: /data/job
-    timeout: 60
+    timeout: 100
   - command: |
        echo "--- :arrow_down: Downloading build directory" && \
@@ -213,9 +198,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L long_running_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":aws: Tests"
     agents:
       - "role=linux-tester"
@@ -227,4 +209,4 @@ steps:
       docker#v1.4.0:
         image: "eosio/ci:amazonlinux"
         workdir: /data/job
-    timeout: 60
+    timeout: 100
.buildkite/pipeline.yml

@@ -97,9 +97,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        ln -s "$(pwd)" /data/job && cd /data/job/build && ctest -j8 -LE _tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":darwin: Tests"
     agents:
       - "role=macos-tester"
@@ -117,9 +114,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        ln -s "$(pwd)" /data/job && cd /data/job/build && ctest -L nonparallelizable_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":darwin: NP Tests"
     agents:
       - "role=macos-tester"
@@ -137,9 +131,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -j8 -LE _tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":ubuntu: Tests"
     agents:
       - "role=linux-tester"
@@ -161,9 +152,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L nonparallelizable_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":ubuntu: NP Tests"
     agents:
       - "role=linux-tester"
@@ -185,9 +173,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -j8 -LE _tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":ubuntu: 18.04 Tests"
     agents:
       - "role=linux-tester"
@@ -209,9 +194,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L nonparallelizable_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":ubuntu: 18.04 NP Tests"
     agents:
       - "role=linux-tester"
@@ -233,9 +215,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -j8 -LE _tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":fedora: Tests"
     agents:
       - "role=linux-tester"
@@ -257,9 +236,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L nonparallelizable_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":fedora: NP Tests"
     agents:
       - "role=linux-tester"
@@ -281,9 +257,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -j8 -LE _tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":centos: Tests"
     agents:
       - "role=linux-tester"
@@ -305,9 +278,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L nonparallelizable_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":centos: NP Tests"
     agents:
       - "role=linux-tester"
@@ -329,9 +299,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -j8 -LE _tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":aws: Tests"
     agents:
       - "role=linux-tester"
@@ -353,9 +320,6 @@ steps:
        $(which mongod) --fork --logpath "$(pwd)"/mongod.log && \
        echo "+++ :microscope: Running tests" && \
        cd /data/job/build && ctest -L nonparallelizable_tests --output-on-failure
-    retry:
-      automatic:
-        limit: 1
     label: ":aws: NP Tests"
     agents:
       - "role=linux-tester"
libraries/chain/abi_serializer.cpp

@@ -800,7 +800,7 @@ namespace eosio { namespace chain {
    }

    fc::scoped_exit<std::function<void()>> variant_to_binary_context::disallow_extensions_unless( bool condition ) {
-      std::function<void()> callback = [old_recursion_depth=recursion_depth, old_allow_extensions=allow_extensions, this](){
+      std::function<void()> callback = [old_allow_extensions=allow_extensions, this](){
         allow_extensions = old_allow_extensions;
      };
plugins/COMMUNITY.md

@@ -6,6 +6,7 @@ Third parties are encouraged to make pull requests to this file (`develop` branch
 | Description | URL |
 | ----------- | --- |
 | BP Heartbeat | https://github.com/bancorprotocol/eos-producer-heartbeat-plugin |
+| ElasticSearch | https://github.com/EOSLaoMao/elasticsearch_plugin |
 | Kafka | https://github.com/TP-Lab/kafka_plugin |
 | MySQL | https://github.com/eosBLACK/eosio_mysqldb_plugin |
programs/eosio-blocklog/main.cpp

@@ -35,6 +35,7 @@ struct blocklog {
    uint32_t first_block;
    uint32_t last_block;
    bool     no_pretty_print;
+   bool     as_json_array;
 };

 void blocklog::read_log() {
@@ -82,6 +83,8 @@ void blocklog::read_log() {
    else
       out = &std::cout;
+   if (as_json_array)
+      *out << "[";
    uint32_t block_num = (first_block < 1) ? 1 : first_block;
    signed_block_ptr next;
    fc::variant pretty_output;
@@ -104,20 +107,27 @@ void blocklog::read_log() {
       else
         *out << fc::json::to_pretty_string(v) << "\n";
    };
+   bool contains_obj = false;
    while ((block_num <= last_block) && (next = block_logger.read_block_by_num( block_num ))) {
+      if (as_json_array && contains_obj)
+         *out << ",";
       print_block(next);
       ++block_num;
+      contains_obj = true;
       out->flush();
    }
-   if (!reversible_blocks) {
-      return;
-   }
+   if (reversible_blocks) {
       const reversible_block_object* obj = nullptr;
       while( (block_num <= last_block) && (obj = reversible_blocks->find<reversible_block_object, by_num>(block_num)) ) {
+         if (as_json_array && contains_obj)
+            *out << ",";
          auto next = obj->get_block();
          print_block(next);
          ++block_num;
+         contains_obj = true;
       }
+   }
+   if (as_json_array)
+      *out << "]";
 }

 void blocklog::set_program_options(options_description& cli)
@@ -133,6 +143,8 @@ void blocklog::set_program_options(options_description& cli)
           "the last block number (inclusive) to log")
          ("no-pretty-print", bpo::bool_switch(&no_pretty_print)->default_value(false),
           "Do not pretty print the output. Useful if piping to jq to improve performance.")
+         ("as-json-array", bpo::bool_switch(&as_json_array)->default_value(false),
+          "Print out json blocks wrapped in json array (otherwise the output is free-standing json objects).")
          ("help", "Print this help message and exit.")
          ;
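The eosio-blocklog change above wraps free-standing JSON blocks into a single JSON array: emit "[" up front, a "," before every element except the first (tracked by a contains_obj flag), and a closing "]". A minimal standalone sketch of that streaming pattern in Python (the function name and arguments are illustrative, not from the source):

```python
import io

def emit_json_array(blocks, out):
    """Stream pre-rendered JSON objects as one JSON array, inserting a
    comma only before the second and later elements -- the same
    contains_obj pattern used in blocklog::read_log above."""
    out.write("[")
    contains_obj = False
    for block in blocks:
        if contains_obj:
            out.write(",")
        out.write(block)
        contains_obj = True
    out.write("]")

buf = io.StringIO()
emit_json_array(['{"n": 1}', '{"n": 2}'], buf)
print(buf.getvalue())  # [{"n": 1},{"n": 2}]
```

Keeping the flag outside both loops is what lets the reversible-block loop continue the same array the irreversible-block loop started.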
tests/CMakeLists.txt

@@ -83,8 +83,6 @@ add_test(NAME bnet_nodeos_sanity_lr_test COMMAND tests/nodeos_run_test.py -v --s
 set_property(TEST bnet_nodeos_sanity_lr_test PROPERTY LABELS long_running_tests)

 add_test(NAME nodeos_run_check_lr_test COMMAND tests/nodeos_run_test.py -v --clean-run --dump-error-detail WORKING_DIRECTORY ${CMAKE_BINARY_DIR})
 set_property(TEST nodeos_run_check_lr_test PROPERTY LABELS long_running_tests)

 add_test(NAME nodeos_run_check2_lr_test COMMAND tests/nodeos_run_test.py -v --wallet-port 9900 --clean-run --dump-error-detail WORKING_DIRECTORY ${CMAKE_BINARY_DIR})
 set_property(TEST nodeos_run_check2_lr_test PROPERTY LABELS long_running_tests)

-#add_test(NAME distributed_transactions_lr_test COMMAND tests/distributed-transactions-test.py -d 2 -p 21 -n 21 -v --clean-run --dump-error-detail WORKING_DIRECTORY ${CMAKE_BINARY_DIR})
-#set_property(TEST distributed_transactions_lr_test PROPERTY LABELS long_running_tests)
tests/Cluster.py

@@ -30,6 +30,9 @@ class Cluster(object):
     __BiosPort=8788
     __LauncherCmdArr=[]
     __bootlog="eosio-ignition-wd/bootlog.txt"
+    __configDir="etc/eosio/"
+    __dataDir="var/lib/"
+    __fileDivider="================================================================="

     # pylint: disable=too-many-arguments
     # walletd [True|False] Is keosd running. If not load the wallet plugin
@@ -745,6 +748,14 @@ class Cluster(object):
         m=re.search(r"node_([\d]+)", name)
         return int(m.group(1))

+    @staticmethod
+    def nodeExtensionToName(ext):
+        r"""Convert node extension (bios, 0, 1, etc) to node name. """
+        prefix="node_"
+        if ext == "bios":
+            return prefix + ext
+        return "node_%02d" % (ext)
+
     @staticmethod
     def parseProducerKeys(configFile, nodeName):
@@ -783,8 +794,7 @@ class Cluster(object):
     def parseProducers(nodeNum):
         """Parse node config file for producers."""
-        node="node_%02d" % (nodeNum)
-        configFile="etc/eosio/%s/config.ini" % (node)
+        configFile=Cluster.__configDir + Cluster.nodeExtensionToName(nodeNum) + "/config.ini"
         if Utils.Debug: Utils.Print("Parsing config file %s" % configFile)
         configStr=None
         with open(configFile, 'r') as f:
@@ -802,20 +812,20 @@ class Cluster(object):
     def parseClusterKeys(totalNodes):
         """Parse cluster config file. Updates producer keys data members."""
-        node="node_bios"
-        configFile="etc/eosio/%s/config.ini" % (node)
+        nodeName=Cluster.nodeExtensionToName("bios")
+        configFile=Cluster.__configDir + nodeName + "/config.ini"
         if Utils.Debug: Utils.Print("Parsing config file %s" % configFile)
-        producerKeys=Cluster.parseProducerKeys(configFile, node)
+        producerKeys=Cluster.parseProducerKeys(configFile, nodeName)
         if producerKeys is None:
             Utils.Print("ERROR: Failed to parse eosio private keys from cluster config files.")
             return None

         for i in range(0, totalNodes):
-            node="node_%02d" % (i)
-            configFile="etc/eosio/%s/config.ini" % (node)
+            nodeName=Cluster.nodeExtensionToName(i)
+            configFile=Cluster.__configDir + nodeName + "/config.ini"
             if Utils.Debug: Utils.Print("Parsing config file %s" % configFile)

-            keys=Cluster.parseProducerKeys(configFile, node)
+            keys=Cluster.parseProducerKeys(configFile, nodeName)
             if keys is not None:
                 producerKeys.update(keys)
             keyMsg="None" if keys is None else len(keys)
@@ -1183,11 +1193,8 @@ class Cluster(object):
     @staticmethod
     def pgrepEosServerPattern(nodeInstance):
-        if isinstance(nodeInstance, str):
-            return r"[\n]?(\d+) (.* --data-dir var/lib/node_%s .*)\n" % nodeInstance
-        else:
-            nodeInstanceStr="%02d" % nodeInstance
-            return Cluster.pgrepEosServerPattern(nodeInstanceStr)
+        dataLocation=Cluster.__dataDir + Cluster.nodeExtensionToName(nodeInstance)
+        return r"[\n]?(\d+) (.* --data-dir %s .*)\n" % (dataLocation)

     # Populates list of EosInstanceInfo objects, matched to actual running instances
     def discoverLocalNodes(self, totalNodes, timeout=None):
@@ -1259,7 +1266,7 @@ class Cluster(object):
     @staticmethod
     def dumpErrorDetailImpl(fileName):
-        Utils.Print("=================================================================")
+        Utils.Print(Cluster.__fileDivider)
         Utils.Print("Contents of %s:" % (fileName))
         if os.path.exists(fileName):
             with open(fileName, "r") as f:
@@ -1268,17 +1275,18 @@ class Cluster(object):
             Utils.Print("File %s not found." % (fileName))

     def dumpErrorDetails(self):
-        fileName="etc/eosio/node_bios/config.ini"
+        fileName=Cluster.__configDir + Cluster.nodeExtensionToName("bios") + "/config.ini"
         Cluster.dumpErrorDetailImpl(fileName)
-        fileName="var/lib/node_bios/stderr.txt"
+        fileName=Cluster.__dataDir + Cluster.nodeExtensionToName("bios") + "/stderr.txt"
         Cluster.dumpErrorDetailImpl(fileName)

         for i in range(0, len(self.nodes)):
-            fileName="etc/eosio/node_%02d/config.ini" % (i)
+            configLocation=Cluster.__configDir + Cluster.nodeExtensionToName(i) + "/"
+            fileName=configLocation + "config.ini"
             Cluster.dumpErrorDetailImpl(fileName)
-            fileName="etc/eosio/node_%02d/genesis.json" % (i)
+            fileName=configLocation + "genesis.json"
             Cluster.dumpErrorDetailImpl(fileName)
-            fileName="var/lib/node_%02d/stderr.txt" % (i)
+            fileName=Cluster.__dataDir + Cluster.nodeExtensionToName(i) + "/stderr.txt"
             Cluster.dumpErrorDetailImpl(fileName)

         if self.useBiosBootFile:
@@ -1350,9 +1358,9 @@ class Cluster(object):
         return node.waitForNextBlock(timeout)

     def cleanup(self):
-        for f in glob.glob("var/lib/node_*"):
+        for f in glob.glob(Cluster.__dataDir + "node_*"):
             shutil.rmtree(f)
-        for f in glob.glob("etc/eosio/node_*"):
+        for f in glob.glob(Cluster.__configDir + "node_*"):
             shutil.rmtree(f)
         for f in self.filesToCleanup:
@@ -1407,3 +1415,38 @@ class Cluster(object):
                 node.reportStatus()
             except:
                 Utils.Print("No reportStatus")

+    def printBlockLogIfNeeded(self):
+        printBlockLog=False
+        if hasattr(self, "nodes"):
+            for node in self.nodes:
+                if node.missingTransaction:
+                    printBlockLog=True
+                    break
+
+        if hasattr(self, "biosNode") and self.biosNode.missingTransaction:
+            printBlockLog=True
+
+        if not printBlockLog:
+            return
+
+        self.printBlockLog()
+
+    def printBlockLog(self):
+        blockLogDir=Cluster.__dataDir + Cluster.nodeExtensionToName("bios") + "/blocks/"
+        blockLogBios=Utils.getBlockLog(blockLogDir, exitOnError=False)
+        Utils.Print(Cluster.__fileDivider)
+        Utils.Print("Block log from %s:\n%s" % (blockLogDir, json.dumps(blockLogBios, indent=1)))
+
+        if not hasattr(self, "nodes"):
+            return
+
+        numNodes=len(self.nodes)
+        for i in range(numNodes):
+            node=self.nodes[i]
+            blockLogDir=Cluster.__dataDir + Cluster.nodeExtensionToName(i) + "/blocks/"
+            blockLog=Utils.getBlockLog(blockLogDir, exitOnError=False)
+            Utils.Print(Cluster.__fileDivider)
+            Utils.Print("Block log from %s:\n%s" % (blockLogDir, json.dumps(blockLog, indent=1)))
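The nodeExtensionToName helper added to Cluster.py centralizes node-directory naming, so paths like etc/eosio/node_02/config.ini are built from shared prefixes instead of hard-coded "node_%02d" strings scattered through the file. A standalone sketch of that mapping (module-level functions standing in for the class statics):

```python
CONFIG_DIR = "etc/eosio/"   # mirrors Cluster.__configDir

def node_extension_to_name(ext):
    """Mirror of Cluster.nodeExtensionToName: "bios" maps straight to
    node_bios; integer extensions become zero-padded node_NN names."""
    prefix = "node_"
    if ext == "bios":
        return prefix + ext
    return "node_%02d" % (ext)

def config_file(ext):
    # Same concatenation the diff uses in parseProducers/parseClusterKeys.
    return CONFIG_DIR + node_extension_to_name(ext) + "/config.ini"

print(config_file("bios"))  # etc/eosio/node_bios/config.ini
print(config_file(2))       # etc/eosio/node_02/config.ini
```

Accepting either the string "bios" or an integer is what lets the same helper serve the bios node and the numbered nodes.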
tests/Node.py

@@ -50,6 +50,7 @@ class Node(object):
         self.lastRetrievedLIB=None
         self.transCache={}
         self.walletMgr=walletMgr
+        self.missingTransaction=False

         if self.enableMongo:
             self.mongoEndpointArgs += "--host %s --port %d %s" % (mongoHost, mongoPort, mongoDb)
@@ -188,7 +189,7 @@ class Node(object):
         outStr=Node.byteArrToStr(outs)
         if not outStr:
             return None
-        extJStr=Utils.filterJsonObject(outStr)
+        extJStr=Utils.filterJsonObjectOrArray(outStr)
         if not extJStr:
             return None
         jStr=Node.normalizeJsonObject(extJStr)
@@ -324,60 +325,11 @@ class Node(object):
         """Is blockNum finalized"""
         return self.isBlockPresent(blockNum, blockType=BlockType.lib)

-    class BlockWalker:
-        def __init__(self, node, transId, startBlockNum=None, endBlockNum=None):
-            assert(isinstance(transId, str))
-            self.trans=None
-            self.transId=transId
-            self.node=node
-            self.startBlockNum=startBlockNum
-            self.endBlockNum=endBlockNum
-
-        def walkBlocks(self):
-            start=None
-            end=None
-            if self.trans is None and self.transId in self.transCache.keys():
-                self.trans=self.transCache[self.transId]
-            if self.trans is not None:
-                cntxt=Node.Context(self.trans, "trans")
-                cntxt.add("processed")
-                cntxt.add("action_traces")
-                cntxt.index(0)
-                blockNum=cntxt.add("block_num")
-            else:
-                blockNum=None
-            # it should be blockNum or later, but just in case the block leading up have any clues...
-            start=None
-            if self.startBlockNum is not None:
-                start=self.startBlockNum
-            elif blockNum is not None:
-                start=blockNum - 5
-            if self.endBlockNum is not None:
-                end=self.endBlockNum
-            else:
-                info=self.node.getInfo()
-                end=info["head_block_num"]
-            if start is None:
-                if end > 100:
-                    start=end - 100
-                else:
-                    start=0
-            transDesc=" id =%s" % (self.transId)
-            if self.trans is not None:
-                transDesc="=%s" % (json.dumps(self.trans, indent=2, sort_keys=True))
-            msg="Original transaction%s\nExpected block_num=%s\n" % (transDesc, blockNum)
-            for blockNum in range(start, end + 1):
-                block=self.node.getBlock(blockNum)
-                msg += json.dumps(block, indent=2, sort_keys=True) + "\n"
-            return msg
-
     # pylint: disable=too-many-branches
     def getTransaction(self, transId, silentErrors=False, exitOnError=False, delayedRetry=True):
         assert(isinstance(transId, str))
         exitOnErrorForDelayed=not delayedRetry and exitOnError
         timeout=3
-        blockWalker=None
         if not self.enableMongo:
             cmdDesc="get transaction"
             cmd="%s %s" % (cmdDesc, transId)
@@ -386,12 +338,10 @@ class Node(object):
                 trans=self.processCleosCmd(cmd, cmdDesc, silentErrors=silentErrors, exitOnError=exitOnErrorForDelayed, exitMsg=msg)
                 if trans is not None or not delayedRetry:
                     return trans
-                if blockWalker is None:
-                    blockWalker=Node.BlockWalker(self, transId)
                 if Utils.Debug: Utils.Print("Could not find transaction with id %s, delay and retry" % (transId))
                 time.sleep(timeout)

-            msg += "\nBlock printout -->>\n%s" % blockWalker.walkBlocks();
+            self.missingTransaction=True
             # either it is there or the transaction has timed out
             return self.processCleosCmd(cmd, cmdDesc, silentErrors=silentErrors, exitOnError=exitOnError, exitMsg=msg)
         else:
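With BlockWalker gone, getTransaction keeps only the delayed-retry shape: poll a few times, and if the transaction never shows up, make one final attempt while recording the miss on self.missingTransaction so TestHelper can dump block logs afterwards. A sketch of that shape in isolation (fetch, retries, and delay are illustrative names, not from the source):

```python
import time

def get_with_retry(fetch, retries=3, delay=0.0):
    """Poll fetch() up to `retries` times; on persistent failure make one
    last attempt and flag the result as missing (the second element of
    the returned tuple), mirroring how getTransaction sets
    self.missingTransaction before its final processCleosCmd call."""
    for _ in range(retries):
        result = fetch()
        if result is not None:
            return result, False   # found; nothing missing
        time.sleep(delay)
    return fetch(), True           # final attempt; flag as missing

calls = {"n": 0}
def fetch():
    calls["n"] += 1
    return "trace" if calls["n"] >= 3 else None

print(get_with_retry(fetch))  # ('trace', False)
```

Deferring the expensive diagnostics to a flag checked at teardown is the design change here: the per-call block walk is replaced by one block-log dump at the end of the run.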
tests/TestHelper.py

@@ -148,6 +148,7 @@ class TestHelper(object):
             cluster.dumpErrorDetails()
             if walletMgr:
                 walletMgr.dumpErrorDetails()
+            cluster.printBlockLogIfNeeded()
             Utils.Print("== Errors see above ==")
             if len(Utils.CheckOutputDeque) > 0:
                 Utils.Print("== cout/cerr pairs from last %d calls to Utils. ==" % len(Utils.CheckOutputDeque))
tests/distributed-transactions-remote-test.py

@@ -47,7 +47,7 @@ clusterMapJsonTemplate="""{
 }
 """

-cluster=Cluster()
+cluster=Cluster(walletd=True)
 (fd, nodesFile)=tempfile.mkstemp()
 try:
@@ -58,7 +58,7 @@ try:
     Print("producing nodes: %s, non-producing nodes: %d, topology: %s, delay between nodes launch(seconds): %d" % (pnodes, total_nodes-pnodes, topo, delay))

     Print("Stand up cluster")
-    if cluster.launch(pnodes, total_nodes, prodCount, topo, delay) is False:
+    if cluster.launch(pnodes=pnodes, totalNodes=total_nodes, prodCount=prodCount, topo=topo, delay=delay, dontKill=dontKill) is False:
         errorExit("Failed to stand up eos cluster.")

     Print ("Wait for Cluster stabilization")
@@ -76,7 +76,7 @@ try:
     tfile.write(clusterMapJson)
     tfile.close()

-    cmd="%s --nodes-file %s %s %s" % (actualTest, nodesFile, "-v" if debug else "", "--dont-kill" if dontKill else "")
+    cmd="%s --nodes-file %s %s %s" % (actualTest, nodesFile, "-v" if debug else "", "--leave-running" if dontKill else "")
     Print("Starting up distributed transactions test: %s" % (actualTest))
     Print("cmd: %s\n" % (cmd))
     if 0 != subprocess.call(cmd, shell=True):
tests/distributed-transactions-test.py

@@ -19,6 +19,7 @@ delay=args.d
 total_nodes = pnodes if args.n == 0 else args.n
 debug=args.v
 nodesFile=args.nodes_file
+dontLaunch=nodesFile is not None
 seed=args.seed
 dontKill=args.leave_running
 dumpErrorDetails=args.dump_error_details
@@ -40,7 +41,7 @@ walletMgr=WalletMgr(True)
 try:
     cluster.setWalletMgr(walletMgr)

-    if nodesFile is not None:
+    if dontLaunch:
         # run test against remote cluster
         jsonStr=None
         with open(nodesFile, "r") as f:
             jsonStr=f.read()
@@ -72,7 +73,10 @@ try:
     accountsCount=total_nodes
     walletName="MyWallet-%d" % (random.randrange(10000))
     Print("Creating wallet %s if one doesn't already exist." % walletName)
-    wallet=walletMgr.create(walletName, [cluster.eosioAccount, cluster.defproduceraAccount, cluster.defproducerbAccount])
+    walletAccounts=[cluster.defproduceraAccount, cluster.defproducerbAccount]
+    if not dontLaunch:
+        walletAccounts.append(cluster.eosioAccount)
+    wallet=walletMgr.create(walletName, walletAccounts)
     if wallet is None:
         errorExit("Failed to create wallet %s" % (walletName))
tests/nodeos_run_remote_test.py

@@ -33,7 +33,7 @@ total_nodes=pnodes
 actualTest="tests/nodeos_run_test.py"
 testSuccessful=False

-cluster=Cluster()
+cluster=Cluster(walletd=True)
 try:
     Print("BEGIN")
     cluster.killall(allInstances=killAll)
@@ -42,7 +42,8 @@ try:
     Print("producing nodes: %s, non-producing nodes: %d, topology: %s, delay between nodes launch(seconds): %d" % (pnodes, total_nodes-pnodes, topo, delay))

     Print("Stand up cluster")
-    if cluster.launch(pnodes, total_nodes, prodCount, topo, delay, onlyBios=onlyBios) is False:
+    if cluster.launch(pnodes=pnodes, totalNodes=total_nodes, prodCount=prodCount, topo=topo, delay=delay,
+                      onlyBios=onlyBios, dontKill=dontKill) is False:
         errorExit("Failed to stand up eos cluster.")

     Print ("Wait for Cluster stabilization")
@@ -54,7 +55,7 @@ try:
     defproduceraPrvtKey=producerKeys["defproducera"]["private"]
     defproducerbPrvtKey=producerKeys["defproducerb"]["private"]

-    cmd="%s --dont-launch --defproducera_prvt_key %s --defproducerb_prvt_key %s %s %s %s" % (actualTest, defproduceraPrvtKey, defproducerbPrvtKey, "-v" if debug else "", "--dont-kill" if dontKill else "", "--only-bios" if onlyBios else "")
+    cmd="%s --dont-launch --defproducera_prvt_key %s --defproducerb_prvt_key %s %s %s %s" % (actualTest, defproduceraPrvtKey, defproducerbPrvtKey, "-v" if debug else "", "--leave-running" if dontKill else "", "--only-bios" if onlyBios else "")
     Print("Starting up %s test: %s" % ("nodeos", actualTest))
     Print("cmd: %s\n" % (cmd))
     if 0 != subprocess.call(cmd, shell=True):
tests/nodeos_run_test.py

@@ -71,6 +71,7 @@ try:
             cmdError("launcher")
             errorExit("Failed to stand up eos cluster.")
     else:
+        Print("Collecting cluster info.")
         cluster.initializeNodes(defproduceraPrvtKey=defproduceraPrvtKey, defproducerbPrvtKey=defproducerbPrvtKey)
         killEosInstances=False

     Print("Stand up %s" % (WalletdName))
@@ -113,7 +114,10 @@ try:
     testWalletName="test"

     Print("Creating wallet \"%s\"." % (testWalletName))
-    testWallet=walletMgr.create(testWalletName, [cluster.eosioAccount, cluster.defproduceraAccount, cluster.defproducerbAccount])
+    walletAccounts=[cluster.defproduceraAccount, cluster.defproducerbAccount]
+    if not dontLaunch:
+        walletAccounts.append(cluster.eosioAccount)
+    testWallet=walletMgr.create(testWalletName, walletAccounts)

     Print("Wallet \"%s\" password=%s." % (testWalletName, testWallet.password.encode("utf-8")))
@@ -200,15 +204,14 @@ try:
     Print("Validating accounts before user accounts creation")
     cluster.validateAccounts(None)

-    # create accounts via eosio as otherwise a bid is needed
-    Print("Create new account %s via %s" % (testeraAccount.name, cluster.eosioAccount.name))
-    transId=node.createInitializeAccount(testeraAccount, cluster.eosioAccount, stakedDeposit=0, waitForTransBlock=False, exitOnError=True)
+    Print("Create new account %s via %s" % (testeraAccount.name, cluster.defproduceraAccount.name))
+    transId=node.createInitializeAccount(testeraAccount, cluster.defproduceraAccount, stakedDeposit=0, waitForTransBlock=False, exitOnError=True)

-    Print("Create new account %s via %s" % (currencyAccount.name, cluster.eosioAccount.name))
-    transId=node.createInitializeAccount(currencyAccount, cluster.eosioAccount, buyRAM=1000000, stakedDeposit=5000, exitOnError=True)
+    Print("Create new account %s via %s" % (currencyAccount.name, cluster.defproduceraAccount.name))
+    transId=node.createInitializeAccount(currencyAccount, cluster.defproduceraAccount, buyRAM=200000, stakedDeposit=5000, exitOnError=True)

-    Print("Create new account %s via %s" % (exchangeAccount.name, cluster.eosioAccount.name))
-    transId=node.createInitializeAccount(exchangeAccount, cluster.eosioAccount, buyRAM=1000000, waitForTransBlock=True, exitOnError=True)
+    Print("Create new account %s via %s" % (exchangeAccount.name, cluster.defproduceraAccount.name))
+    transId=node.createInitializeAccount(exchangeAccount, cluster.defproduceraAccount, buyRAM=200000, waitForTransBlock=True, exitOnError=True)

     Print("Validating accounts after user accounts creation")
     accounts=[testeraAccount, currencyAccount, exchangeAccount]
tests/nodeos_under_min_avail_ram.py

@@ -48,6 +48,7 @@ class NamedAccounts:
         Print("NamedAccounts Name for %d is %s" % (temp, retStr))
         return retStr

 ###############################################################
 # nodeos_voting_test
+# --dump-error-details <Upon error print etc/eosio/node_*/config.ini and var/lib/node_*/stderr.log to stdout>
@@ -151,6 +152,7 @@ try:
     count=0
     while keepProcessing:
         numAmount += 1
+        timeOutCount=0
         for fromIndex in range(namedAccounts.numAccounts):
             count += 1
             toIndex = fromIndex + 1
@@ -163,8 +165,15 @@ try:
             try:
                 trans=nodes[0].pushMessage(contract, action, data, opts)
                 if trans is None or not trans[0]:
+                    timeOutCount += 1
+                    if timeOutCount >= 3:
+                        Print("Failed to push create action to eosio contract for %d consecutive times, looks like nodeos already exited." % (timeOutCount))
+                        keepProcessing=False
+                        break
                     Print("Failed to push create action to eosio contract. sleep for 60 seconds")
                     time.sleep(60)
+                else:
+                    timeOutCount=0
                 time.sleep(1)
             except TypeError as ex:
                 keepProcessing=False
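The timeOutCount logic added above gives the test an exit hatch: every failed pushMessage bumps a counter, every success resets it, and three consecutive failures are taken to mean nodeos has already exited. The same shape in isolation (push is an illustrative stand-in for nodes[0].pushMessage(...)):

```python
def push_until_dead(push, max_consecutive_failures=3):
    """Keep pushing, resetting the failure counter on every success, and
    stop once push() fails max_consecutive_failures times in a row --
    the timeOutCount pattern from nodeos_under_min_avail_ram.py.
    Returns how many pushes succeeded before giving up."""
    failures = 0
    pushed = 0
    while True:
        if push():
            failures = 0
            pushed += 1
        else:
            failures += 1
            if failures >= max_consecutive_failures:
                return pushed

results = iter([True, False, True, False, False, False])
print(push_until_dead(lambda: next(results)))  # 2
```

Counting only consecutive failures is what distinguishes a dead server from the intermittent failures the test deliberately provokes by running nodeos low on RAM.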
tests/testUtils.py

@@ -32,6 +32,8 @@ class Utils:
     ShuttingDown=False
     CheckOutputDeque=deque(maxlen=10)

+    EosBlockLogPath="programs/eosio-blocklog/eosio-blocklog"
+
     @staticmethod
     def Print(*args, **kwargs):
         stackDepth=len(inspect.stack())-2
@@ -136,16 +138,25 @@ class Utils:
         return False if ret is None else ret

     @staticmethod
-    def filterJsonObject(data):
-        firstIdx=data.find('{')
-        lastIdx=data.rfind('}')
-        retStr=data[firstIdx:lastIdx+1]
+    def filterJsonObjectOrArray(data):
+        firstObjIdx=data.find('{')
+        lastObjIdx=data.rfind('}')
+        firstArrayIdx=data.find('[')
+        lastArrayIdx=data.rfind(']')
+        if firstArrayIdx==-1 or lastArrayIdx==-1:
+            retStr=data[firstObjIdx:lastObjIdx+1]
+        elif firstObjIdx==-1 or lastObjIdx==-1:
+            retStr=data[firstArrayIdx:lastArrayIdx+1]
+        elif lastArrayIdx < lastObjIdx:
+            retStr=data[firstObjIdx:lastObjIdx+1]
+        else:
+            retStr=data[firstArrayIdx:lastArrayIdx+1]
         return retStr

     @staticmethod
     def runCmdArrReturnJson(cmdArr, trace=False, silentErrors=True):
         retStr=Utils.checkOutput(cmdArr)
-        jStr=Utils.filterJsonObject(retStr)
+        jStr=Utils.filterJsonObjectOrArray(retStr)
         if trace: Utils.Print ("RAW > %s" % (retStr))
         if trace: Utils.Print ("JSON> %s" % (jStr))
         if not jStr:
@@ -213,6 +224,30 @@ class Utils:
         return "pgrep %s %s" % (pgrepOpts, serverName)

+    @staticmethod
+    def getBlockLog(blockLogLocation, silentErrors=False, exitOnError=False):
+        assert(isinstance(blockLogLocation, str))
+        cmd="%s --blocks-dir %s --as-json-array" % (Utils.EosBlockLogPath, blockLogLocation)
+        if Utils.Debug: Utils.Print("cmd: %s" % (cmd))
+        rtn=None
+        try:
+            rtn=Utils.runCmdReturnJson(cmd, silentErrors=silentErrors)
+        except subprocess.CalledProcessError as ex:
+            if not silentErrors:
+                msg=ex.output.decode("utf-8")
+                errorMsg="Exception during \"%s\". %s" % (cmd, msg)
+                if exitOnError:
+                    Utils.cmdError(errorMsg)
+                    Utils.errorExit(errorMsg)
+                else:
+                    Utils.Print("ERROR: %s" % (errorMsg))
+            return None
+
+        if exitOnError and rtn is None:
+            Utils.cmdError("could not \"%s\"" % (cmd))
+            Utils.errorExit("Failed to \"%s\"" % (cmd))
+
+        return rtn

 ###########################################################################################
 class Account(object):
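filterJsonObjectOrArray generalizes the old filterJsonObject: command output may now carry a top-level JSON array (for example from eosio-blocklog --as-json-array), so the helper slices out whichever outermost structure, object or array, encloses the other. A standalone sketch of the same index comparisons:

```python
def filter_json_object_or_array(data):
    """Extract the outermost JSON object or array from noisy command
    output, preferring whichever structure closes last (i.e. encloses
    the other) -- a sketch of Utils.filterJsonObjectOrArray above."""
    first_obj, last_obj = data.find('{'), data.rfind('}')
    first_arr, last_arr = data.find('['), data.rfind(']')
    if first_arr == -1 or last_arr == -1:
        return data[first_obj:last_obj + 1]   # only an object present
    if first_obj == -1 or last_obj == -1:
        return data[first_arr:last_arr + 1]   # only an array present
    if last_arr < last_obj:
        return data[first_obj:last_obj + 1]   # object encloses the array
    return data[first_arr:last_arr + 1]       # array encloses the object

print(filter_json_object_or_array('log noise [{"a": 1}] trailing'))  # [{"a": 1}]
print(filter_json_object_or_array('warn {"b": [2]} done'))           # {"b": [2]}
```

Note this is a heuristic on brace positions, not a JSON parse: it assumes the payload is the single outermost balanced structure in the output, which holds for cleos and eosio-blocklog invocations but not for arbitrary text.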