openanolis / dragonwell8_hotspot

Commit ea1f4051 (Merge)
Authored on Apr 29, 2011 by iveresov
Parents: b1575360, 086cc9b4

Showing 11 changed files with 243 additions and 117 deletions (+243 -117)
src/share/vm/gc_implementation/g1/concurrentMark.cpp     +88  -21
src/share/vm/gc_implementation/g1/concurrentMark.hpp     +33  -1
src/share/vm/gc_implementation/g1/g1CollectedHeap.cpp    +28  -19
src/share/vm/gc_implementation/g1/g1CollectedHeap.hpp    +2   -0
src/share/vm/gc_implementation/g1/g1RemSet.cpp           +20  -32
src/share/vm/gc_implementation/g1/g1_globals.hpp         +5   -1
src/share/vm/gc_implementation/g1/heapRegion.cpp         +29  -1
src/share/vm/gc_implementation/g1/heapRegion.hpp         +8   -3
src/share/vm/memory/cardTableModRefBS.cpp                +26  -32
src/share/vm/memory/cardTableModRefBS.hpp                +4   -1
src/share/vm/memory/modRefBarrierSet.hpp                 +0   -6
src/share/vm/gc_implementation/g1/concurrentMark.cpp

@@ -826,6 +826,14 @@ public:
 void ConcurrentMark::checkpointRootsInitialPost() {
   G1CollectedHeap* g1h = G1CollectedHeap::heap();
 
+  // If we force an overflow during remark, the remark operation will
+  // actually abort and we'll restart concurrent marking. If we always
+  // force an overflow during remark we'll never actually complete the
+  // marking phase. So, we initialize this here, at the start of the
+  // cycle, so that the remaining overflow number will decrease at
+  // every remark and we'll eventually not need to cause one.
+  force_overflow_stw()->init();
+
   // For each region note start of marking.
   NoteStartOfMarkHRClosure startcl;
   g1h->heap_region_iterate(&startcl);
@@ -893,27 +901,37 @@ void ConcurrentMark::checkpointRootsInitial() {
 }
 
 /*
- Notice that in the next two methods, we actually leave the STS
- during the barrier sync and join it immediately afterwards. If we
- do not do this, this then the following deadlock can occur: one
- thread could be in the barrier sync code, waiting for the other
- thread to also sync up, whereas another one could be trying to
- yield, while also waiting for the other threads to sync up too.
-
- Because the thread that does the sync barrier has left the STS, it
- is possible to be suspended for a Full GC or an evacuation pause
- could occur. This is actually safe, since the entering the sync
- barrier is one of the last things do_marking_step() does, and it
- doesn't manipulate any data structures afterwards.
-*/
+ * Notice that in the next two methods, we actually leave the STS
+ * during the barrier sync and join it immediately afterwards. If we
+ * do not do this, the following deadlock can occur: one thread could
+ * be in the barrier sync code, waiting for the other thread to also
+ * sync up, whereas another one could be trying to yield, while also
+ * waiting for the other threads to sync up too.
+ *
+ * Note, however, that this code is also used during remark and in
+ * this case we should not attempt to leave / enter the STS, otherwise
+ * we'll either hit an assert (debug / fastdebug) or deadlock
+ * (product). So we should only leave / enter the STS if we are
+ * operating concurrently.
+ *
+ * Because the thread that does the sync barrier has left the STS, it
+ * is possible to be suspended for a Full GC or an evacuation pause
+ * could occur. This is actually safe, since the entering the sync
+ * barrier is one of the last things do_marking_step() does, and it
+ * doesn't manipulate any data structures afterwards.
+ */
 
 void ConcurrentMark::enter_first_sync_barrier(int task_num) {
   if (verbose_low())
     gclog_or_tty->print_cr("[%d] entering first barrier", task_num);
 
-  ConcurrentGCThread::stsLeave();
+  if (concurrent()) {
+    ConcurrentGCThread::stsLeave();
+  }
   _first_overflow_barrier_sync.enter();
-  ConcurrentGCThread::stsJoin();
+  if (concurrent()) {
+    ConcurrentGCThread::stsJoin();
+  }
   // at this point everyone should have synced up and not be doing any
   // more work
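The guarded leave/join around the barrier sync is the heart of this hunk. Below is a minimal standalone sketch of that control flow, not HotSpot code: the STS and Barrier types are illustrative stand-ins, and only the leave/enter/join call shape mirrors the patched methods.

    #include <cstdio>

    // Illustrative stand-ins for HotSpot's suspendible thread set and
    // the overflow barrier; only the call shape matches the real code.
    struct STS     { void leave() { std::puts("left STS");  }
                     void join()  { std::puts("joined STS"); } };
    struct Barrier { void enter() { std::puts("barrier sync"); } };

    static STS     sts;
    static Barrier first_overflow_barrier;

    // During a concurrent cycle the worker must leave the STS around the
    // barrier sync so a safepoint can proceed while it waits; during
    // remark (itself a safepoint) it must not touch the STS at all.
    void enter_first_sync_barrier_sketch(bool concurrent) {
      if (concurrent) sts.leave();
      first_overflow_barrier.enter();
      if (concurrent) sts.join();
    }

    int main() {
      enter_first_sync_barrier_sketch(true);   // concurrent marking path
      enter_first_sync_barrier_sketch(false);  // remark (STW) path: no STS traffic
      return 0;
    }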
@@ -923,7 +941,12 @@ void ConcurrentMark::enter_first_sync_barrier(int task_num) {
   // let task 0 do this
   if (task_num == 0) {
     // task 0 is responsible for clearing the global data structures
-    clear_marking_state();
+    // We should be here because of an overflow. During STW we should
+    // not clear the overflow flag since we rely on it being true when
+    // we exit this method to abort the pause and restart concurrent
+    // marking.
+    clear_marking_state(concurrent() /* clear_overflow */);
+    force_overflow()->update();
 
     if (PrintGC) {
       gclog_or_tty->date_stamp(PrintGCDateStamps);
@@ -940,15 +963,45 @@ void ConcurrentMark::enter_second_sync_barrier(int task_num) {
   if (verbose_low())
     gclog_or_tty->print_cr("[%d] entering second barrier", task_num);
 
-  ConcurrentGCThread::stsLeave();
+  if (concurrent()) {
+    ConcurrentGCThread::stsLeave();
+  }
   _second_overflow_barrier_sync.enter();
-  ConcurrentGCThread::stsJoin();
+  if (concurrent()) {
+    ConcurrentGCThread::stsJoin();
+  }
   // at this point everything should be re-initialised and ready to go
 
   if (verbose_low())
     gclog_or_tty->print_cr("[%d] leaving second barrier", task_num);
 }
 
+#ifndef PRODUCT
+void ForceOverflowSettings::init() {
+  _num_remaining = G1ConcMarkForceOverflow;
+  _force = false;
+  update();
+}
+
+void ForceOverflowSettings::update() {
+  if (_num_remaining > 0) {
+    _num_remaining -= 1;
+    _force = true;
+  } else {
+    _force = false;
+  }
+}
+
+bool ForceOverflowSettings::should_force() {
+  if (_force) {
+    _force = false;
+    return true;
+  } else {
+    return false;
+  }
+}
+#endif // !PRODUCT
+
 void ConcurrentMark::grayRoot(oop p) {
   HeapWord* addr = (HeapWord*) p;
   // We can't really check against _heap_start and _heap_end, since it
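Taken together, init(), update() and should_force() implement a one-shot countdown: each marking round consumes one forced overflow until the G1ConcMarkForceOverflow budget runs out, which is why a forced remark overflow cannot loop forever. A minimal standalone model of that behavior (plain C++, not HotSpot code; "budget" plays the role of the flag):

    #include <cstdio>

    // Standalone model of ForceOverflowSettings' countdown behavior.
    struct ForceOverflowModel {
      unsigned num_remaining;
      bool     force;

      void init(unsigned budget) { num_remaining = budget; force = false; update(); }
      void update() {                    // called once per marking restart
        if (num_remaining > 0) { num_remaining -= 1; force = true; }
        else                   { force = false; }
      }
      bool should_force() {              // one-shot: raising the flag consumes it
        if (force) { force = false; return true; }
        return false;
      }
    };

    int main() {
      ForceOverflowModel f;
      f.init(2);                         // force two overflows, then stop
      for (int round = 1; round <= 4; ++round) {
        bool forced = f.should_force();
        std::printf("marking round %d: forced overflow = %s\n",
                    round, forced ? "yes" : "no");
        if (!forced) break;              // marking completes once nothing is forced
        f.update();                      // the restart path calls update()
      }
      return 0;
    }

Rounds 1 and 2 report a forced overflow; round 3 does not, so marking is allowed to complete.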
@@ -1117,6 +1170,7 @@ void ConcurrentMark::markFromRoots() {
   _restart_for_overflow = false;
 
   size_t active_workers = MAX2((size_t) 1, parallel_marking_threads());
+  force_overflow_conc()->init();
   set_phase(active_workers, true /* concurrent */);
 
   CMConcurrentMarkingTask markingTask(this, cmThread());
@@ -1845,7 +1899,7 @@ void ConcurrentMark::completeCleanup() {
   while (!_cleanup_list.is_empty()) {
     HeapRegion* hr = _cleanup_list.remove_head();
     assert(hr != NULL, "the list was not empty");
-    hr->rem_set()->clear();
+    hr->par_clear();
 
     tmp_free_list.add_as_tail(hr);
 
     // Instead of adding one region at a time to the secondary_free_list,
@@ -2703,12 +2757,16 @@ void ConcurrentMark::oops_do(OopClosure* cl) {
 }
 
-void ConcurrentMark::clear_marking_state() {
+void ConcurrentMark::clear_marking_state(bool clear_overflow) {
   _markStack.setEmpty();
   _markStack.clear_overflow();
   _regionStack.setEmpty();
   _regionStack.clear_overflow();
-  clear_has_overflown();
+  if (clear_overflow) {
+    clear_has_overflown();
+  } else {
+    assert(has_overflown(), "pre-condition");
+  }
   _finger = _heap_start;
 
   for (int i = 0; i < (int)_max_task_num; ++i) {
@@ -4279,6 +4337,15 @@ void CMTask::do_marking_step(double time_target_ms,
     }
   }
 
+  // If we are about to wrap up and go into termination, check if we
+  // should raise the overflow flag.
+  if (do_termination && !has_aborted()) {
+    if (_cm->force_overflow()->should_force()) {
+      _cm->set_has_overflown();
+      regular_clock_call();
+    }
+  }
+
   // We still haven't aborted. Now, let's try to get into the
   // termination protocol.
   if (do_termination && !has_aborted()) {
src/share/vm/gc_implementation/g1/concurrentMark.hpp

@@ -316,6 +316,19 @@ public:
   void setEmpty()   { _index = 0; clear_overflow(); }
 };
 
+class ForceOverflowSettings VALUE_OBJ_CLASS_SPEC {
+private:
+#ifndef PRODUCT
+  uintx _num_remaining;
+  bool _force;
+#endif // !defined(PRODUCT)
+
+public:
+  void init() PRODUCT_RETURN;
+  void update() PRODUCT_RETURN;
+  bool should_force() PRODUCT_RETURN_( return false; );
+};
+
 // this will enable a variety of different statistics per GC task
 #define _MARKING_STATS_ 0
 // this will enable the higher verbose levels
@@ -462,6 +475,9 @@ protected:
   WorkGang* _parallel_workers;
 
+  ForceOverflowSettings _force_overflow_conc;
+  ForceOverflowSettings _force_overflow_stw;
+
   void weakRefsWork(bool clear_all_soft_refs);
 
   void swapMarkBitMaps();
@@ -470,7 +486,7 @@ protected:
   // task local ones; should be called during initial mark.
   void reset();
   // It resets all the marking data structures.
-  void clear_marking_state();
+  void clear_marking_state(bool clear_overflow = true);
 
   // It should be called to indicate which phase we're in (concurrent
   // mark or remark) and how many threads are currently active.
@@ -547,6 +563,22 @@ protected:
   void enter_first_sync_barrier(int task_num);
   void enter_second_sync_barrier(int task_num);
 
+  ForceOverflowSettings* force_overflow_conc() {
+    return &_force_overflow_conc;
+  }
+
+  ForceOverflowSettings* force_overflow_stw() {
+    return &_force_overflow_stw;
+  }
+
+  ForceOverflowSettings* force_overflow() {
+    if (concurrent()) {
+      return force_overflow_conc();
+    } else {
+      return force_overflow_stw();
+    }
+  }
+
 public:
   // Manipulation of the global mark stack.
   // Notice that the first mark_stack_push is CAS-based, whereas the
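ForceOverflowSettings compiles away in product builds via HotSpot's PRODUCT_RETURN macros: in a product build the declaration expands to an empty inline body (or a stub body for should_force()), while in debug builds it stays a plain declaration whose out-of-line bodies live in the #ifndef PRODUCT block in concurrentMark.cpp. A condensed view of the idiom, adapted from HotSpot's utilities/macros.hpp:

    // Condensed from HotSpot's utilities/macros.hpp:
    #ifdef PRODUCT
      #define PRODUCT_RETURN          {}            // empty inline body
      #define PRODUCT_RETURN_(code)   { code }      // inline stub body
    #else
      #define PRODUCT_RETURN          /* declaration only; next token is ';' */
      #define PRODUCT_RETURN_(code)   /* declaration only; next token is ';' */
    #endif

    // So in a product build the class above becomes, in effect:
    //   void init()         {}
    //   void update()       {}
    //   bool should_force() { return false; }
    // and the forced-overflow test machinery costs nothing in shipping VMs.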
src/share/vm/gc_implementation/g1/g1CollectedHeap.cpp

@@ -4961,36 +4961,45 @@ public:
 #ifndef PRODUCT
 class G1VerifyCardTableCleanup: public HeapRegionClosure {
+  G1CollectedHeap* _g1h;
   CardTableModRefBS* _ct_bs;
 public:
-  G1VerifyCardTableCleanup(CardTableModRefBS* ct_bs)
-    : _ct_bs(ct_bs) { }
+  G1VerifyCardTableCleanup(G1CollectedHeap* g1h, CardTableModRefBS* ct_bs)
+    : _g1h(g1h), _ct_bs(ct_bs) { }
   virtual bool doHeapRegion(HeapRegion* r) {
-    MemRegion mr(r->bottom(), r->end());
     if (r->is_survivor()) {
-      _ct_bs->verify_dirty_region(mr);
+      _g1h->verify_dirty_region(r);
     } else {
-      _ct_bs->verify_clean_region(mr);
+      _g1h->verify_not_dirty_region(r);
     }
     return false;
   }
 };
 
+void G1CollectedHeap::verify_not_dirty_region(HeapRegion* hr) {
+  // All of the region should be clean.
+  CardTableModRefBS* ct_bs = (CardTableModRefBS*) barrier_set();
+  MemRegion mr(hr->bottom(), hr->end());
+  ct_bs->verify_not_dirty_region(mr);
+}
+
+void G1CollectedHeap::verify_dirty_region(HeapRegion* hr) {
+  // We cannot guarantee that [bottom(),end()] is dirty. Threads
+  // dirty allocated blocks as they allocate them. The thread that
+  // retires each region and replaces it with a new one will do a
+  // maximal allocation to fill in [pre_dummy_top(),end()] but will
+  // not dirty that area (one less thing to have to do while holding
+  // a lock). So we can only verify that [bottom(),pre_dummy_top()]
+  // is dirty.
+  CardTableModRefBS* ct_bs = (CardTableModRefBS*) barrier_set();
+  MemRegion mr(hr->bottom(), hr->pre_dummy_top());
+  ct_bs->verify_dirty_region(mr);
+}
+
 void G1CollectedHeap::verify_dirty_young_list(HeapRegion* head) {
-  CardTableModRefBS* ct_bs = (CardTableModRefBS*) (barrier_set());
+  CardTableModRefBS* ct_bs = (CardTableModRefBS*) barrier_set();
   for (HeapRegion* hr = head; hr != NULL; hr = hr->get_next_young_region()) {
-    // We cannot guarantee that [bottom(),end()] is dirty. Threads
-    // dirty allocated blocks as they allocate them. The thread that
-    // retires each region and replaces it with a new one will do a
-    // maximal allocation to fill in [pre_dummy_top(),end()] but will
-    // not dirty that area (one less thing to have to do while holding
-    // a lock). So we can only verify that [bottom(),pre_dummy_top()]
-    // is dirty. Also note that verify_dirty_region() requires
-    // mr.start() and mr.end() to be card aligned and pre_dummy_top()
-    // is not guaranteed to be.
-    MemRegion mr(hr->bottom(),
-                 ct_bs->align_to_card_boundary(hr->pre_dummy_top()));
-    ct_bs->verify_dirty_region(mr);
+    verify_dirty_region(hr);
   }
 }
 
@@ -5033,7 +5042,7 @@ void G1CollectedHeap::cleanUpCardTable() {
   g1_policy()->record_clear_ct_time(elapsed * 1000.0);
 #ifndef PRODUCT
   if (G1VerifyCTCleanup || VerifyAfterGC) {
-    G1VerifyCardTableCleanup cleanup_verifier(ct_bs);
+    G1VerifyCardTableCleanup cleanup_verifier(this, ct_bs);
     heap_region_iterate(&cleanup_verifier);
   }
 #endif
src/share/vm/gc_implementation/g1/g1CollectedHeap.hpp

@@ -970,6 +970,8 @@ public:
   // The number of regions available for "regular" expansion.
   size_t expansion_regions() { return _expansion_regions; }
 
+  void verify_not_dirty_region(HeapRegion* hr) PRODUCT_RETURN;
+  void verify_dirty_region(HeapRegion* hr) PRODUCT_RETURN;
   void verify_dirty_young_list(HeapRegion* head) PRODUCT_RETURN;
   void verify_dirty_young_regions() PRODUCT_RETURN;
src/share/vm/gc_implementation/g1/g1RemSet.cpp

@@ -157,7 +157,6 @@ public:
   void set_try_claimed() { _try_claimed = true; }
 
   void scanCard(size_t index, HeapRegion *r) {
-    _cards_done++;
     DirtyCardToOopClosure* cl =
       r->new_dcto_closure(_oc,
                           CardTableModRefBS::Precise,
@@ -168,17 +167,14 @@ public:
     HeapWord* card_start = _bot_shared->address_for_index(index);
     HeapWord* card_end = card_start + G1BlockOffsetSharedArray::N_words;
     Space *sp = SharedHeap::heap()->space_containing(card_start);
-    MemRegion sm_region;
-    if (ParallelGCThreads > 0) {
-      // first find the used area
-      sm_region = sp->used_region_at_save_marks();
-    } else {
-      // The closure is not idempotent. We shouldn't look at objects
-      // allocated during the GC.
-      sm_region = sp->used_region_at_save_marks();
-    }
+    MemRegion sm_region = sp->used_region_at_save_marks();
     MemRegion mr = sm_region.intersection(MemRegion(card_start, card_end));
-    if (!mr.is_empty()) {
+    if (!mr.is_empty() && !_ct_bs->is_card_claimed(index)) {
+      // We mark the card as "claimed" lazily (so races are possible
+      // but they're benign), which reduces the number of duplicate
+      // scans (the rsets of the regions in the cset can intersect).
+      _ct_bs->set_card_claimed(index);
+      _cards_done++;
       cl->do_MemRegion(mr);
    }
  }
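The claim-before-scan logic now lives inside scanCard() itself, next to the emptiness check, so every caller gets the deduplication and _cards_done counts only cards actually scanned. Below is a standalone sketch of why the lazy (non-atomic) claim is sound; it is not HotSpot code and the rem-set contents are invented for illustration. A racy check-then-set can at worst let two scanners process the same card twice; it can never cause a card to be skipped.

    #include <cstdio>
    #include <vector>

    int main() {
      std::vector<bool> claimed(8, false);   // per-card "claimed" bits
      size_t cards_done = 0;

      // Two rem sets whose card indices intersect (cards 2 and 3), as the
      // rsets of regions in the collection set can.
      const size_t rset_a[] = {1, 2, 3};
      const size_t rset_b[] = {2, 3, 5};

      auto scan_card = [&](size_t index) {
        if (!claimed[index]) {     // racy in the real code: the worst case
          claimed[index] = true;   // is a duplicate scan, never a missed card
          ++cards_done;
          std::printf("scanning card %zu\n", index);
        }
      };

      for (size_t idx : rset_a) scan_card(idx);
      for (size_t idx : rset_b) scan_card(idx);

      // 6 rem-set entries, but only the 4 distinct cards were scanned.
      std::printf("cards_done = %zu\n", cards_done);
      return 0;
    }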
@@ -199,6 +195,9 @@ public:
     HeapRegionRemSet* hrrs = r->rem_set();
     if (hrrs->iter_is_complete()) return false; // All done.
     if (!_try_claimed && !hrrs->claim_iter()) return false;
+    // If we ever free the collection set concurrently, we should also
+    // clear the card table concurrently therefore we won't need to
+    // add regions of the collection set to the dirty cards region.
     _g1h->push_dirty_cards_region(r);
     // If we didn't return above, then
     //   _try_claimed || r->claim_iter()
@@ -230,15 +229,10 @@ public:
         _g1h->push_dirty_cards_region(card_region);
       }
 
       // If the card is dirty, then we will scan it during updateRS.
       if (!card_region->in_collection_set() &&
           !_ct_bs->is_card_dirty(card_index)) {
-        // We mark the card as "claimed" lazily (so races are possible but they're benign),
-        // which reduces the number of duplicate scans (the rsets of the regions in the cset
-        // can intersect).
-        if (!_ct_bs->is_card_claimed(card_index)) {
-          _ct_bs->set_card_claimed(card_index);
-          scanCard(card_index, card_region);
-        }
+        scanCard(card_index, card_region);
       }
     }
     if (!_try_claimed) {
@@ -246,8 +240,6 @@ public:
     }
     return false;
   }
 
-  // Set all cards back to clean.
-  void cleanup() {_g1h->cleanUpCardTable();}
-
   size_t cards_done() { return _cards_done;}
   size_t cards_looked_up() { return _cards;}
 };
@@ -566,8 +558,9 @@ public:
   update_rs_cl.set_region(r);
   HeapWord* stop_point =
     r->oops_on_card_seq_iterate_careful(scanRegion,
                                         &filter_then_update_rs_cset_oop_cl,
-                                        false /* filter_young */);
+                                        false /* filter_young */,
+                                        NULL  /* card_ptr */);
 
   // Since this is performed in the event of an evacuation failure, we
   // shouldn't see a non-null stop point
@@ -735,12 +728,6 @@ bool G1RemSet::concurrentRefineOneCard_impl(jbyte* card_ptr, int worker_i,
                  (OopClosure*)&mux :
                  (OopClosure*)&update_rs_oop_cl));
 
-  // Undirty the card.
-  *card_ptr = CardTableModRefBS::clean_card_val();
-  // We must complete this write before we do any of the reads below.
-  OrderAccess::storeload();
-  // And process it, being careful of unallocated portions of TLAB's.
-
   // The region for the current card may be a young region. The
   // current card may have been a card that was evicted from the
   // card cache. When the card was inserted into the cache, we had
@@ -749,7 +736,7 @@ bool G1RemSet::concurrentRefineOneCard_impl(jbyte* card_ptr, int worker_i,
   // and tagged as young.
   //
   // We wish to filter out cards for such a region but the current
-  // thread, if we're running conucrrently, may "see" the young type
+  // thread, if we're running concurrently, may "see" the young type
   // change at any time (so an earlier "is_young" check may pass or
   // fail arbitrarily). We tell the iteration code to perform this
   // filtering when it has been determined that there has been an actual
@@ -759,7 +746,8 @@ bool G1RemSet::concurrentRefineOneCard_impl(jbyte* card_ptr, int worker_i,
   HeapWord* stop_point =
     r->oops_on_card_seq_iterate_careful(dirtyRegion,
                                         &filter_then_update_rs_oop_cl,
-                                        filter_young);
+                                        filter_young,
+                                        card_ptr);
 
   // If stop_point is non-null, then we encountered an unallocated region
   // (perhaps the unfilled portion of a TLAB.) For now, we'll dirty the
src/share/vm/gc_implementation/g1/g1_globals.hpp

@@ -311,7 +311,11 @@
                                                                             \
   develop(bool, G1ExitOnExpansionFailure, false,                            \
           "Raise a fatal VM exit out of memory failure in the event "      \
-          " that heap expansion fails due to running out of swap.")
+          " that heap expansion fails due to running out of swap.")        \
+                                                                            \
+  develop(uintx, G1ConcMarkForceOverflow, 0,                                \
+          "The number of times we'll force an overflow during "            \
+          "concurrent marking")
 
 G1_FLAGS(DECLARE_DEVELOPER_FLAG, DECLARE_PD_DEVELOPER_FLAG, DECLARE_PRODUCT_FLAG, DECLARE_PD_PRODUCT_FLAG, DECLARE_DIAGNOSTIC_FLAG, DECLARE_EXPERIMENTAL_FLAG, DECLARE_NOTPRODUCT_FLAG, DECLARE_MANAGEABLE_FLAG, DECLARE_PRODUCT_RW_FLAG)
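Because G1ConcMarkForceOverflow is declared with develop(...), it is a compile-time constant in product builds and can only be set on a debug or fastdebug VM. On such a build, one plausible way to exercise the new overflow/restart paths would be (the flag value and program name here are just an example):

    java -XX:+UseG1GC -XX:G1ConcMarkForceOverflow=3 MyApp

Each forced overflow aborts the current marking round and restarts it, and the countdown in ForceOverflowSettings guarantees the forcing stops after three rounds.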
src/share/vm/gc_implementation/g1/heapRegion.cpp

@@ -376,6 +376,17 @@ void HeapRegion::hr_clear(bool par, bool clear_space) {
   if (clear_space) clear(SpaceDecorator::Mangle);
 }
 
+void HeapRegion::par_clear() {
+  assert(used() == 0, "the region should have been already cleared");
+  assert(capacity() == (size_t) HeapRegion::GrainBytes,
+         "should be back to normal");
+  HeapRegionRemSet* hrrs = rem_set();
+  hrrs->clear();
+  CardTableModRefBS* ct_bs =
+                   (CardTableModRefBS*) G1CollectedHeap::heap()->barrier_set();
+  ct_bs->clear(MemRegion(bottom(), end()));
+}
+
 // <PREDICTION>
 void HeapRegion::calc_gc_efficiency() {
   G1CollectedHeap* g1h = G1CollectedHeap::heap();
@@ -600,7 +611,15 @@ HeapWord*
 HeapRegion::
 oops_on_card_seq_iterate_careful(MemRegion mr,
                                  FilterOutOfRegionClosure* cl,
-                                 bool filter_young) {
+                                 bool filter_young,
+                                 jbyte* card_ptr) {
+  // Currently, we should only have to clean the card if filter_young
+  // is true and vice versa.
+  if (filter_young) {
+    assert(card_ptr != NULL, "pre-condition");
+  } else {
+    assert(card_ptr == NULL, "pre-condition");
+  }
   G1CollectedHeap* g1h = G1CollectedHeap::heap();
 
   // If we're within a stop-world GC, then we might look at a card in a
@@ -626,6 +645,15 @@ oops_on_card_seq_iterate_careful(MemRegion mr,
   assert(!is_young(), "check value of filter_young");
 
+  // We can only clean the card here, after we make the decision that
+  // the card is not young. And we only clean the card if we have been
+  // asked to (i.e., card_ptr != NULL).
+  if (card_ptr != NULL) {
+    *card_ptr = CardTableModRefBS::clean_card_val();
+    // We must complete this write before we do any of the reads below.
+    OrderAccess::storeload();
+  }
+
   // We used to use "block_start_careful" here. But we're actually happy
   // to update the BOT while we do this...
   HeapWord* cur = block_start(mr.start());
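Moving the clean into oops_on_card_seq_iterate_careful() preserves the ordering protocol that previously lived in g1RemSet.cpp: the card must be cleaned before the scan reads the heap, so any mutator update the scan misses will re-dirty the card after the clean and get reprocessed later. A sketch of that protocol using standard C++ atomics, not HotSpot code; std::atomic_thread_fence(seq_cst) stands in for OrderAccess::storeload(), and the variables are illustrative:

    #include <atomic>

    enum : signed char { clean_card = -1, dirty_card = 0 };

    std::atomic<signed char> card(dirty_card);   // stands in for *card_ptr
    std::atomic<int>         heap_word(0);       // stands in for scanned heap data

    // Refinement side: clean first, full fence, then read. If a concurrent
    // mutator write lands after our read, its card dirtying follows our
    // clean, so the card is found dirty again and rescanned later.
    void refine_sketch() {
      card.store(clean_card, std::memory_order_relaxed);
      std::atomic_thread_fence(std::memory_order_seq_cst); // storeload analogue
      int v = heap_word.load(std::memory_order_relaxed);   // the "scan"
      (void) v;
    }

    int main() { refine_sketch(); return 0; }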
src/share/vm/gc_implementation/g1/heapRegion.hpp

@@ -584,6 +584,7 @@ class HeapRegion: public G1OffsetTableContigSpace {
   // Reset HR stuff to default values.
   void hr_clear(bool par, bool clear_space);
+  void par_clear();
 
   void initialize(MemRegion mr, bool clear_space, bool mangle_space);
@@ -802,12 +803,16 @@ class HeapRegion: public G1OffsetTableContigSpace {
   HeapWord*
   object_iterate_mem_careful(MemRegion mr, ObjectClosure* cl);
 
-  // In this version - if filter_young is true and the region
-  // is a young region then we skip the iteration.
+  // filter_young: if true and the region is a young region then we
+  // skip the iteration.
+  // card_ptr: if not NULL, and we decide that the card is not young
+  // and we iterate over it, we'll clean the card before we start the
+  // iteration.
   HeapWord*
   oops_on_card_seq_iterate_careful(MemRegion mr,
                                    FilterOutOfRegionClosure* cl,
-                                   bool filter_young);
+                                   bool filter_young,
+                                   jbyte* card_ptr);
 
   // A version of block start that is guaranteed to find *some* block
   // boundary at or before "p", but does not object iteration, and may
src/share/vm/memory/cardTableModRefBS.cpp

@@ -652,43 +652,37 @@ void CardTableModRefBS::verify() {
 }
 
 #ifndef PRODUCT
-class GuaranteeNotModClosure: public MemRegionClosure {
-  CardTableModRefBS* _ct;
-public:
-  GuaranteeNotModClosure(CardTableModRefBS* ct) : _ct(ct) {}
-  void do_MemRegion(MemRegion mr) {
-    jbyte* entry = _ct->byte_for(mr.start());
-    guarantee(*entry != CardTableModRefBS::clean_card,
-              "Dirty card in region that should be clean");
+void CardTableModRefBS::verify_region(MemRegion mr,
+                                      jbyte val, bool val_equals) {
+  jbyte* start    = byte_for(mr.start());
+  jbyte* end      = byte_for(mr.last());
+  bool   failures = false;
+  for (jbyte* curr = start; curr <= end; ++curr) {
+    jbyte curr_val = *curr;
+    bool failed = (val_equals) ? (curr_val != val) : (curr_val == val);
+    if (failed) {
+      if (!failures) {
+        tty->cr();
+        tty->print_cr("== CT verification failed: ["PTR_FORMAT","PTR_FORMAT"]");
+        tty->print_cr("==   %sexpecting value: %d",
+                      (val_equals) ? "" : "not ", val);
+        failures = true;
+      }
+      tty->print_cr("==   card "PTR_FORMAT" ["PTR_FORMAT","PTR_FORMAT"], "
+                    "val: %d", curr, addr_for(curr),
+                    (HeapWord*) (((size_t) addr_for(curr)) + card_size),
+                    (int) curr_val);
+    }
   }
-};
-
-void CardTableModRefBS::verify_clean_region(MemRegion mr) {
-  GuaranteeNotModClosure blk(this);
-  non_clean_card_iterate_serial(mr, &blk);
+  guarantee(!failures, "there should not have been any failures");
 }
 
-// To verify a MemRegion is entirely dirty this closure is passed to
-// dirty_card_iterate. If the region is dirty do_MemRegion will be
-// invoked only once with a MemRegion equal to the one being
-// verified.
-class GuaranteeDirtyClosure: public MemRegionClosure {
-  CardTableModRefBS* _ct;
-  MemRegion _mr;
-  bool _result;
-public:
-  GuaranteeDirtyClosure(CardTableModRefBS* ct, MemRegion mr)
-    : _ct(ct), _mr(mr), _result(false) {}
-  void do_MemRegion(MemRegion mr) {
-    _result = _mr.equals(mr);
-  }
-  bool result() const { return _result; }
-};
+void CardTableModRefBS::verify_not_dirty_region(MemRegion mr) {
+  verify_region(mr, dirty_card, false /* val_equals */);
+}
 
 void CardTableModRefBS::verify_dirty_region(MemRegion mr) {
-  GuaranteeDirtyClosure blk(this, mr);
-  dirty_card_iterate(mr, &blk);
-  guarantee(blk.result(), "Non-dirty cards in region that should be dirty");
+  verify_region(mr, dirty_card, true /* val_equals */);
 }
 #endif
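verify_region() folds both of the old closures into one per-card predicate: with val_equals every card covered by mr must equal val, and without it no card may. Using the HotSpot card values (clean_card = -1, dirty_card = 0), a tiny standalone check of that predicate (not HotSpot code):

    #include <cassert>

    // Mirrors verify_region()'s per-card test.
    static bool fails(signed char curr_val, signed char val, bool val_equals) {
      return val_equals ? (curr_val != val) : (curr_val == val);
    }

    int main() {
      const signed char clean_card = -1, dirty_card = 0;
      // verify_dirty_region(mr)     == verify_region(mr, dirty_card, true):
      assert(!fails(dirty_card, dirty_card, true));   // dirty card passes
      assert( fails(clean_card, dirty_card, true));   // clean card fails
      // verify_not_dirty_region(mr) == verify_region(mr, dirty_card, false):
      assert(!fails(clean_card, dirty_card, false));  // clean card passes
      assert( fails(dirty_card, dirty_card, false));  // dirty card fails
      return 0;
    }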
src/share/vm/memory/cardTableModRefBS.hpp

@@ -475,7 +475,10 @@ public:
   void verify();
   void verify_guard();
 
-  void verify_clean_region(MemRegion mr) PRODUCT_RETURN;
+  // val_equals -> it will check that all cards covered by mr equal val
+  // !val_equals -> it will check that all cards covered by mr do not equal val
+  void verify_region(MemRegion mr, jbyte val, bool val_equals) PRODUCT_RETURN;
+  void verify_not_dirty_region(MemRegion mr) PRODUCT_RETURN;
   void verify_dirty_region(MemRegion mr) PRODUCT_RETURN;
 
   static size_t par_chunk_heapword_alignment() {
src/share/vm/memory/modRefBarrierSet.hpp

@@ -100,12 +100,6 @@ public:
   // Pass along the argument to the superclass.
   ModRefBarrierSet(int max_covered_regions) :
     BarrierSet(max_covered_regions) {}
 
-#ifndef PRODUCT
-  // Verifies that the given region contains no modified references.
-  virtual void verify_clean_region(MemRegion mr) = 0;
-#endif
 };
 
 #endif // SHARE_VM_MEMORY_MODREFBARRIERSET_HPP