openanolis / dragonwell8_hotspot
Commit 869749fb
Authored on Oct 11, 2013 by ccheung
Merge commit; parents: d2b22862, 06dfd752
Showing 36 changed files with 2181 additions and 1035 deletions.

.hgtags (+2, -0)
agent/src/share/classes/sun/jvm/hotspot/memory/ProtectionDomainCacheEntry.java (+56, -0)
agent/src/share/classes/sun/jvm/hotspot/memory/ProtectionDomainEntry.java (+7, -5)
make/hotspot_version (+1, -1)
src/cpu/sparc/vm/c1_Runtime1_sparc.cpp (+10, -1)
src/cpu/sparc/vm/macroAssembler_sparc.cpp (+7, -1)
src/cpu/x86/vm/c1_Runtime1_x86.cpp (+9, -2)
src/cpu/x86/vm/macroAssembler_x86.cpp (+7, -2)
src/os/linux/vm/globals_linux.hpp (+1, -1)
src/os/linux/vm/os_linux.cpp (+21, -9)
src/share/vm/classfile/dictionary.cpp (+172, -18)
src/share/vm/classfile/dictionary.hpp (+127, -12)
src/share/vm/classfile/systemDictionary.cpp (+27, -1)
src/share/vm/gc_implementation/g1/g1CollectedHeap.cpp (+5, -1)
src/share/vm/gc_implementation/g1/g1CollectedHeap.inline.hpp (+2, -1)
src/share/vm/gc_implementation/g1/g1CollectorPolicy.cpp (+2, -2)
src/share/vm/gc_implementation/g1/g1SATBCardTableModRefBS.cpp (+52, -19)
src/share/vm/gc_implementation/g1/g1SATBCardTableModRefBS.hpp (+10, -0)
src/share/vm/gc_implementation/g1/ptrQueue.hpp (+4, -0)
src/share/vm/gc_implementation/shared/vmGCOperations.hpp (+0, -3)
src/share/vm/gc_interface/collectedHeap.cpp (+0, -6)
src/share/vm/gc_interface/collectedHeap.hpp (+0, -5)
src/share/vm/memory/collectorPolicy.cpp (+96, -141)
src/share/vm/memory/collectorPolicy.hpp (+4, -15)
src/share/vm/memory/filemap.hpp (+1, -0)
src/share/vm/memory/metaspace.cpp (+484, -276)
src/share/vm/memory/metaspace.hpp (+35, -28)
src/share/vm/opto/graphKit.cpp (+12, -3)
src/share/vm/runtime/arguments.cpp (+8, -5)
src/share/vm/runtime/globals.hpp (+529, -469)
src/share/vm/runtime/virtualspace.cpp (+81, -5)
src/share/vm/runtime/virtualspace.hpp (+1, -0)
src/share/vm/runtime/vmStructs.cpp (+9, -2)
src/share/vm/services/memoryService.hpp (+6, -0)
src/share/vm/utilities/globalDefinitions.hpp (+4, -1)
test/runtime/memory/LargePages/TestLargePagesFlags.java (+389, -0)
.hgtags

@@ -381,3 +381,5 @@ a09fe9d1e016c285307507a5793bc4fa6215e9c9 hs25-b50
 566db1b0e6efca31f181456e54c8911d0192410d hs25-b51
 c81dd5393a5e333df7cb1f6621f5897ada6522b5 jdk8-b109
 58043478c26d4e8bf48700acea5f97aba8b417d4 hs25-b52
+6209b0ed51c086d4127bac0e086c8f326d1764d7 jdk8-b110
+562a3d356de67670b4172b82aca2d30743449e04 hs25-b53
agent/src/share/classes/sun/jvm/hotspot/memory/ProtectionDomainCacheEntry.java (new file, 0 → 100644)
/*
* Copyright (c) 2001, 2013, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
package sun.jvm.hotspot.memory;

import java.util.*;
import sun.jvm.hotspot.debugger.*;
import sun.jvm.hotspot.oops.*;
import sun.jvm.hotspot.runtime.*;
import sun.jvm.hotspot.types.*;

public class ProtectionDomainCacheEntry extends VMObject {
  private static sun.jvm.hotspot.types.OopField protectionDomainField;

  static {
    VM.registerVMInitializedObserver(new Observer() {
        public void update(Observable o, Object data) {
          initialize(VM.getVM().getTypeDataBase());
        }
      });
  }

  private static synchronized void initialize(TypeDataBase db) {
    Type type = db.lookupType("ProtectionDomainCacheEntry");
    protectionDomainField = type.getOopField("_literal");
  }

  public ProtectionDomainCacheEntry(Address addr) {
    super(addr);
  }

  public Oop protectionDomain() {
    return VM.getVM().getObjectHeap().newOop(protectionDomainField.getValue(addr));
  }
}
agent/src/share/classes/sun/jvm/hotspot/memory/ProtectionDomainEntry.java

 /*
- * Copyright (c) 2001, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2013, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
...
@@ -32,7 +32,7 @@ import sun.jvm.hotspot.types.*;
 public class ProtectionDomainEntry extends VMObject {
   private static AddressField nextField;
-  private static sun.jvm.hotspot.types.OopField protectionDomainField;
+  private static AddressField pdCacheField;

   static {
     VM.registerVMInitializedObserver(new Observer() {
...
@@ -46,7 +46,7 @@ public class ProtectionDomainEntry extends VMObject {
     Type type = db.lookupType("ProtectionDomainEntry");
     nextField = type.getAddressField("_next");
-    protectionDomainField = type.getOopField("_protection_domain");
+    pdCacheField = type.getAddressField("_pd_cache");
   }

   public ProtectionDomainEntry(Address addr) {
...
@@ -54,10 +54,12 @@ public class ProtectionDomainEntry extends VMObject {
   }

   public ProtectionDomainEntry next() {
-    return (ProtectionDomainEntry) VMObjectFactory.newObject(ProtectionDomainEntry.class, addr);
+    return (ProtectionDomainEntry) VMObjectFactory.newObject(ProtectionDomainEntry.class, nextField.getValue(addr));
   }

   public Oop protectionDomain() {
-    return VM.getVM().getObjectHeap().newOop(protectionDomainField.getValue(addr));
+    ProtectionDomainCacheEntry pd_cache = (ProtectionDomainCacheEntry)
+      VMObjectFactory.newObject(ProtectionDomainCacheEntry.class, pdCacheField.getValue(addr));
+    return pd_cache.protectionDomain();
   }
 }
make/hotspot_version

@@ -35,7 +35,7 @@ HOTSPOT_VM_COPYRIGHT=Copyright 2013
 HS_MAJOR_VER=25
 HS_MINOR_VER=0
-HS_BUILD_NUMBER=53
+HS_BUILD_NUMBER=54

 JDK_MAJOR_VER=1
 JDK_MINOR_VER=8
...
src/cpu/sparc/vm/c1_Runtime1_sparc.cpp

@@ -37,6 +37,9 @@
 #include "runtime/vframeArray.hpp"
 #include "utilities/macros.hpp"
 #include "vmreg_sparc.inline.hpp"
+#if INCLUDE_ALL_GCS
+#include "gc_implementation/g1/g1SATBCardTableModRefBS.hpp"
+#endif

 // Implementation of StubAssembler
...
@@ -912,7 +915,7 @@ OopMapSet* Runtime1::generate_code_for(StubID id, StubAssembler* sasm) {
       Register tmp2 = G3_scratch;
       jbyte* byte_map_base = ((CardTableModRefBS*)bs)->byte_map_base;

-      Label not_already_dirty, restart, refill;
+      Label not_already_dirty, restart, refill, young_card;

 #ifdef _LP64
       __ srlx(addr, CardTableModRefBS::card_shift, addr);
...
@@ -924,9 +927,15 @@ OopMapSet* Runtime1::generate_code_for(StubID id, StubAssembler* sasm) {
       __ set(rs, cardtable);         // cardtable := <card table base>
       __ ldub(addr, cardtable, tmp); // tmp := [addr + cardtable]

+      __ cmp_and_br_short(tmp, G1SATBCardTableModRefBS::g1_young_card_val(), Assembler::equal, Assembler::pt, young_card);
+
+      __ membar(Assembler::Membar_mask_bits(Assembler::StoreLoad));
+      __ ldub(addr, cardtable, tmp); // tmp := [addr + cardtable]
+
       assert(CardTableModRefBS::dirty_card_val() == 0, "otherwise check this code");
       __ cmp_and_br_short(tmp, G0, Assembler::notEqual, Assembler::pt, not_already_dirty);

+      __ bind(young_card);
       // We didn't take the branch, so we're already dirty: return.
       // Use return-from-leaf
       __ retl();
...
src/cpu/sparc/vm/macroAssembler_sparc.cpp

@@ -3752,7 +3752,7 @@ static void generate_dirty_card_log_enqueue(jbyte* byte_map_base) {
 #define __ masm.
   address start = __ pc();

-  Label not_already_dirty, restart, refill;
+  Label not_already_dirty, restart, refill, young_card;

 #ifdef _LP64
   __ srlx(O0, CardTableModRefBS::card_shift, O0);
...
@@ -3763,9 +3763,15 @@ static void generate_dirty_card_log_enqueue(jbyte* byte_map_base) {
   __ set(addrlit, O1); // O1 := <card table base>
   __ ldub(O0, O1, O2); // O2 := [O0 + O1]

+  __ cmp_and_br_short(O2, G1SATBCardTableModRefBS::g1_young_card_val(), Assembler::equal, Assembler::pt, young_card);
+
+  __ membar(Assembler::Membar_mask_bits(Assembler::StoreLoad));
+  __ ldub(O0, O1, O2); // O2 := [O0 + O1]
+
   assert(CardTableModRefBS::dirty_card_val() == 0, "otherwise check this code");
   __ cmp_and_br_short(O2, G0, Assembler::notEqual, Assembler::pt, not_already_dirty);

+  __ bind(young_card);
   // We didn't take the branch, so we're already dirty: return.
   // Use return-from-leaf
   __ retl();
...
src/cpu/x86/vm/c1_Runtime1_x86.cpp

@@ -38,6 +38,9 @@
 #include "runtime/vframeArray.hpp"
 #include "utilities/macros.hpp"
 #include "vmreg_x86.inline.hpp"
+#if INCLUDE_ALL_GCS
+#include "gc_implementation/g1/g1SATBCardTableModRefBS.hpp"
+#endif

 // Implementation of StubAssembler
...
@@ -1753,13 +1756,17 @@ OopMapSet* Runtime1::generate_code_for(StubID id, StubAssembler* sasm) {
         __ leal(card_addr, __ as_Address(ArrayAddress(cardtable, index)));
 #endif

-        __ cmpb(Address(card_addr, 0), 0);
+        __ cmpb(Address(card_addr, 0), (int)G1SATBCardTableModRefBS::g1_young_card_val());
+        __ jcc(Assembler::equal, done);
+
+        __ membar(Assembler::Membar_mask_bits(Assembler::StoreLoad));
+        __ cmpb(Address(card_addr, 0), (int)CardTableModRefBS::dirty_card_val());
         __ jcc(Assembler::equal, done);

         // storing region crossing non-NULL, card is clean.
         // dirty card and log.

-        __ movb(Address(card_addr, 0), 0);
+        __ movb(Address(card_addr, 0), (int)CardTableModRefBS::dirty_card_val());

         __ cmpl(queue_index, 0);
         __ jcc(Assembler::equal, runtime);
...
src/cpu/x86/vm/macroAssembler_x86.cpp

@@ -3389,13 +3389,18 @@ void MacroAssembler::g1_write_barrier_post(Register store_addr,
   const Register card_addr = tmp;
   lea(card_addr, as_Address(ArrayAddress(cardtable, index)));
 #endif

-  cmpb(Address(card_addr, 0), 0);
+  cmpb(Address(card_addr, 0), (int)G1SATBCardTableModRefBS::g1_young_card_val());
   jcc(Assembler::equal, done);

+  membar(Assembler::Membar_mask_bits(Assembler::StoreLoad));
+  cmpb(Address(card_addr, 0), (int)CardTableModRefBS::dirty_card_val());
+  jcc(Assembler::equal, done);
+
   // storing a region crossing, non-NULL oop, card is clean.
   // dirty card and log.
-  movb(Address(card_addr, 0), 0);
+  movb(Address(card_addr, 0), (int)CardTableModRefBS::dirty_card_val());

   cmpl(queue_index, 0);
   jcc(Assembler::equal, runtime);
...
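Across all four barrier stubs above, the new filter has the same shape: return early for stores into young regions, then re-read the card under a StoreLoad fence, and only dirty-and-enqueue a card that is still clean. A minimal C++ model of that decision follows; the card values and the function name are illustrative stand-ins, not the HotSpot constants.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical card values standing in for CardTableModRefBS /
// G1SATBCardTableModRefBS constants.
const uint8_t dirty_card    = 0;  // models CardTableModRefBS::dirty_card_val()
const uint8_t clean_card    = 1;  // some value that is neither dirty nor young
const uint8_t g1_young_card = 2;  // models g1_young_card_val()

// Returns true iff the store requires dirtying the card and enqueuing it
// for concurrent refinement, mirroring the stubs' control flow.
bool needs_enqueue(std::atomic<uint8_t>& card) {
  // 1. Fast path: stores into young regions never need refinement.
  if (card.load(std::memory_order_relaxed) == g1_young_card) return false;
  // 2. StoreLoad barrier (the membar in the stubs), then re-read the card.
  std::atomic_thread_fence(std::memory_order_seq_cst);
  // 3. Another thread may have dirtied it already: nothing left to do.
  if (card.load(std::memory_order_relaxed) == dirty_card) return false;
  // 4. Card is clean: dirty it and report that it must be logged.
  card.store(dirty_card, std::memory_order_relaxed);
  return true;
}
```

The young-card check comes first because it needs no fence at all, which keeps the common case (stores into the young generation) cheap.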
src/os/linux/vm/globals_linux.hpp

@@ -53,7 +53,7 @@
 // Defines Linux-specific default values. The flags are available on all
 // platforms, but they may have different default values on other platforms.
 //
-define_pd_global(bool, UseLargePages,            true);
+define_pd_global(bool, UseLargePages,            false);
 define_pd_global(bool, UseLargePagesIndividualAllocation, false);
 define_pd_global(bool, UseOSErrorReporting,      false);
 define_pd_global(bool, UseThreadPriorities,      true) ;
...
src/os/linux/vm/os_linux.cpp

@@ -3361,13 +3361,15 @@ bool os::Linux::setup_large_page_type(size_t page_size) {
   if (FLAG_IS_DEFAULT(UseHugeTLBFS) &&
       FLAG_IS_DEFAULT(UseSHM) &&
       FLAG_IS_DEFAULT(UseTransparentHugePages)) {
-    // If UseLargePages is specified on the command line try all methods,
-    // if it's default, then try only UseTransparentHugePages.
-    if (FLAG_IS_DEFAULT(UseLargePages)) {
-      UseTransparentHugePages = true;
-    } else {
-      UseHugeTLBFS = UseTransparentHugePages = UseSHM = true;
-    }
+
+    // The type of large pages has not been specified by the user.
+
+    // Try UseHugeTLBFS and then UseSHM.
+    UseHugeTLBFS = UseSHM = true;
+
+    // Don't try UseTransparentHugePages since there are known
+    // performance issues with it turned on. This might change in the future.
+    UseTransparentHugePages = false;
   }

   if (UseTransparentHugePages) {
...
@@ -3393,9 +3395,19 @@ bool os::Linux::setup_large_page_type(size_t page_size) {
 }

 void os::large_page_init() {
-  if (!UseLargePages) {
-    UseHugeTLBFS = false;
+  if (!UseLargePages &&
+      !UseTransparentHugePages &&
+      !UseHugeTLBFS &&
+      !UseSHM) {
+    // Not using large pages.
+    return;
+  }
+
+  if (!FLAG_IS_DEFAULT(UseLargePages) && !UseLargePages) {
+    // The user explicitly turned off large pages.
+    // Ignore the rest of the large pages flags.
     UseTransparentHugePages = false;
+    UseHugeTLBFS = false;
     UseSHM = false;
     return;
   }
...
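The reworked large_page_init() distinguishes "no large-page flag set at all" from "the user explicitly passed -XX:-UseLargePages", and only the latter clears the specific flags. A small C++ sketch of that flag interaction, under the assumption that the struct and function names below are purely illustrative, not HotSpot APIs:

```cpp
// Illustrative model of the flag state large_page_init() inspects.
struct LargePageFlags {
  bool UseLargePages;
  bool UseLargePages_is_default;  // models FLAG_IS_DEFAULT(UseLargePages)
  bool UseTransparentHugePages;
  bool UseHugeTLBFS;
  bool UseSHM;
};

// Returns false when large_page_init() would bail out early.
bool large_page_init_model(LargePageFlags& f) {
  // Nothing requested at all: not using large pages.
  if (!f.UseLargePages && !f.UseTransparentHugePages &&
      !f.UseHugeTLBFS && !f.UseSHM) {
    return false;
  }
  // The user explicitly turned off large pages on the command line:
  // this overrides all of the more specific flags.
  if (!f.UseLargePages_is_default && !f.UseLargePages) {
    f.UseTransparentHugePages = f.UseHugeTLBFS = f.UseSHM = false;
    return false;
  }
  return true;
}
```

Together with the new `false` default for UseLargePages in globals_linux.hpp, this means large pages are now opt-in on Linux rather than attempted by default.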
src/share/vm/classfile/dictionary.cpp

 /*
- * Copyright (c) 2003, 2012, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2013, Oracle and/or its affiliates. All rights reserved.
  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
  *
  * This code is free software; you can redistribute it and/or modify it
...
@@ -25,6 +25,7 @@
 #include "precompiled.hpp"
 #include "classfile/dictionary.hpp"
 #include "classfile/systemDictionary.hpp"
+#include "memory/iterator.hpp"
 #include "oops/oop.inline.hpp"
 #include "prims/jvmtiRedefineClassesTrace.hpp"
 #include "utilities/hashtable.inline.hpp"
...
@@ -38,17 +39,21 @@ Dictionary::Dictionary(int table_size)
   : TwoOopHashtable<Klass*, mtClass>(table_size, sizeof(DictionaryEntry)) {
   _current_class_index = 0;
   _current_class_entry = NULL;
+  _pd_cache_table = new ProtectionDomainCacheTable(defaultProtectionDomainCacheSize);
 };

 Dictionary::Dictionary(int table_size, HashtableBucket<mtClass>* t,
                        int number_of_entries)
   : TwoOopHashtable<Klass*, mtClass>(table_size, sizeof(DictionaryEntry), t, number_of_entries) {
   _current_class_index = 0;
   _current_class_entry = NULL;
+  _pd_cache_table = new ProtectionDomainCacheTable(defaultProtectionDomainCacheSize);
 };

+ProtectionDomainCacheEntry* Dictionary::cache_get(oop protection_domain) {
+  return _pd_cache_table->get(protection_domain);
+}

 DictionaryEntry* Dictionary::new_entry(unsigned int hash, Klass* klass,
                                        ClassLoaderData* loader_data) {
...
@@ -105,11 +110,12 @@ bool DictionaryEntry::contains_protection_domain(oop protection_domain) const {
 }

-void DictionaryEntry::add_protection_domain(oop protection_domain) {
+void DictionaryEntry::add_protection_domain(Dictionary* dict, oop protection_domain) {
   assert_locked_or_safepoint(SystemDictionary_lock);
   if (!contains_protection_domain(protection_domain)) {
+    ProtectionDomainCacheEntry* entry = dict->cache_get(protection_domain);
     ProtectionDomainEntry* new_head =
-                new ProtectionDomainEntry(protection_domain, _pd_set);
+                new ProtectionDomainEntry(entry, _pd_set);
     // Warning: Preserve store ordering.  The SystemDictionary is read
     //          without locks.  The new ProtectionDomainEntry must be
     //          complete before other threads can be allowed to see it
...
@@ -193,7 +199,10 @@ bool Dictionary::do_unloading() {

 void Dictionary::always_strong_oops_do(OopClosure* blk) {
-  // Follow all system classes and temporary placeholders in dictionary
+  // Follow all system classes and temporary placeholders in dictionary; only
+  // protection domain oops contain references into the heap. In a first
+  // pass over the system dictionary determine which need to be treated as
+  // strongly reachable and mark them as such.
   for (int index = 0; index < table_size(); index++) {
     for (DictionaryEntry *probe = bucket(index);
                           probe != NULL;
...
@@ -201,10 +210,13 @@ void Dictionary::always_strong_oops_do(OopClosure* blk) {
       Klass* e = probe->klass();
       ClassLoaderData* loader_data = probe->loader_data();
       if (is_strongly_reachable(loader_data, e)) {
-        probe->protection_domain_set_oops_do(blk);
+        probe->set_strongly_reachable();
       }
     }
   }
+  // Then iterate over the protection domain cache to apply the closure on the
+  // previously marked ones.
+  _pd_cache_table->always_strong_oops_do(blk);
 }
...
@@ -266,18 +278,12 @@ void Dictionary::classes_do(void f(Klass*, ClassLoaderData*)) {
   }
 }

 void Dictionary::oops_do(OopClosure* f) {
-  for (int index = 0; index < table_size(); index++) {
-    for (DictionaryEntry* probe = bucket(index);
-                          probe != NULL;
-                          probe = probe->next()) {
-      probe->protection_domain_set_oops_do(f);
-    }
-  }
+  // Only the protection domain oops contain references into the heap. Iterate
+  // over all of them.
+  _pd_cache_table->oops_do(f);
 }

 void Dictionary::methods_do(void f(Method*)) {
   for (int index = 0; index < table_size(); index++) {
     for (DictionaryEntry* probe = bucket(index);
...
@@ -292,6 +298,11 @@ void Dictionary::methods_do(void f(Method*)) {
   }
 }

+void Dictionary::unlink(BoolObjectClosure* is_alive) {
+  // Only the protection domain cache table may contain references to the heap
+  // that need to be unlinked.
+  _pd_cache_table->unlink(is_alive);
+}

 Klass* Dictionary::try_get_next_class() {
   while (true) {
...
@@ -306,7 +317,6 @@ Klass* Dictionary::try_get_next_class() {
   // never reached
 }

-
 // Add a loaded class to the system dictionary.
 // Readers of the SystemDictionary aren't always locked, so _buckets
 // is volatile. The store of the next field in the constructor is
...
@@ -396,7 +406,7 @@ void Dictionary::add_protection_domain(int index, unsigned int hash,
   assert(protection_domain() != NULL,
          "real protection domain should be present");

-  entry->add_protection_domain(protection_domain());
+  entry->add_protection_domain(this, protection_domain());

   assert(entry->contains_protection_domain(protection_domain()),
          "now protection domain should be present");
...
@@ -446,6 +456,146 @@ void Dictionary::reorder_dictionary() {
   }
 }

+
+ProtectionDomainCacheTable::ProtectionDomainCacheTable(int table_size)
+  : Hashtable<oop, mtClass>(table_size, sizeof(ProtectionDomainCacheEntry))
+{
+}
+
+void ProtectionDomainCacheTable::unlink(BoolObjectClosure* is_alive) {
+  assert(SafepointSynchronize::is_at_safepoint(), "must be");
+  for (int i = 0; i < table_size(); ++i) {
+    ProtectionDomainCacheEntry** p = bucket_addr(i);
+    ProtectionDomainCacheEntry* entry = bucket(i);
+    while (entry != NULL) {
+      if (is_alive->do_object_b(entry->literal())) {
+        p = entry->next_addr();
+      } else {
+        *p = entry->next();
+        free_entry(entry);
+      }
+      entry = *p;
+    }
+  }
+}
+
+void ProtectionDomainCacheTable::oops_do(OopClosure* f) {
+  for (int index = 0; index < table_size(); index++) {
+    for (ProtectionDomainCacheEntry* probe = bucket(index);
+                                     probe != NULL;
+                                     probe = probe->next()) {
+      probe->oops_do(f);
+    }
+  }
+}
+
+uint ProtectionDomainCacheTable::bucket_size() {
+  return sizeof(ProtectionDomainCacheEntry);
+}
+
+#ifndef PRODUCT
+void ProtectionDomainCacheTable::print() {
+  tty->print_cr("Protection domain cache table (table_size=%d, classes=%d)",
+                table_size(), number_of_entries());
+  for (int index = 0; index < table_size(); index++) {
+    for (ProtectionDomainCacheEntry* probe = bucket(index);
+                                     probe != NULL;
+                                     probe = probe->next()) {
+      probe->print();
+    }
+  }
+}
+
+void ProtectionDomainCacheEntry::print() {
+  tty->print_cr("entry " PTR_FORMAT " value " PTR_FORMAT " strongly_reachable %d next " PTR_FORMAT,
+                this, (void*)literal(), _strongly_reachable, next());
+}
+#endif
+
+void ProtectionDomainCacheTable::verify() {
+  int element_count = 0;
+  for (int index = 0; index < table_size(); index++) {
+    for (ProtectionDomainCacheEntry* probe = bucket(index);
+                                     probe != NULL;
+                                     probe = probe->next()) {
+      probe->verify();
+      element_count++;
+    }
+  }
+  guarantee(number_of_entries() == element_count,
+            "Verify of protection domain cache table failed");
+  debug_only(verify_lookup_length((double)number_of_entries() / table_size()));
+}
+
+void ProtectionDomainCacheEntry::verify() {
+  guarantee(literal()->is_oop(), "must be an oop");
+}
+
+void ProtectionDomainCacheTable::always_strong_oops_do(OopClosure* f) {
+  // the caller marked the protection domain cache entries that we need to apply
+  // the closure on. Only process them.
+  for (int index = 0; index < table_size(); index++) {
+    for (ProtectionDomainCacheEntry* probe = bucket(index);
+                                     probe != NULL;
+                                     probe = probe->next()) {
+      if (probe->is_strongly_reachable()) {
+        probe->reset_strongly_reachable();
+        probe->oops_do(f);
+      }
+    }
+  }
+}
+
+ProtectionDomainCacheEntry* ProtectionDomainCacheTable::get(oop protection_domain) {
+  unsigned int hash = compute_hash(protection_domain);
+  int index = hash_to_index(hash);
+
+  ProtectionDomainCacheEntry* entry = find_entry(index, protection_domain);
+  if (entry == NULL) {
+    entry = add_entry(index, hash, protection_domain);
+  }
+  return entry;
+}
+
+ProtectionDomainCacheEntry* ProtectionDomainCacheTable::find_entry(int index, oop protection_domain) {
+  for (ProtectionDomainCacheEntry* e = bucket(index); e != NULL; e = e->next()) {
+    if (e->protection_domain() == protection_domain) {
+      return e;
+    }
+  }
+  return NULL;
+}
+
+ProtectionDomainCacheEntry* ProtectionDomainCacheTable::add_entry(int index, unsigned int hash, oop protection_domain) {
+  assert_locked_or_safepoint(SystemDictionary_lock);
+  assert(index == index_for(protection_domain), "incorrect index?");
+  assert(find_entry(index, protection_domain) == NULL, "no double entry");
+
+  ProtectionDomainCacheEntry* p = new_entry(hash, protection_domain);
+  Hashtable<oop, mtClass>::add_entry(index, p);
+  return p;
+}
+
+void ProtectionDomainCacheTable::free(ProtectionDomainCacheEntry* to_delete) {
+  unsigned int hash = compute_hash(to_delete->protection_domain());
+  int index = hash_to_index(hash);
+
+  ProtectionDomainCacheEntry** p = bucket_addr(index);
+  ProtectionDomainCacheEntry* entry = bucket(index);
+  while (true) {
+    assert(entry != NULL, "sanity");
+
+    if (entry == to_delete) {
+      *p = entry->next();
+      Hashtable<oop, mtClass>::free_entry(entry);
+      break;
+    } else {
+      p = entry->next_addr();
+      entry = *p;
+    }
+  }
+}
+
 SymbolPropertyTable::SymbolPropertyTable(int table_size)
   : Hashtable<Symbol*, mtSymbol>(table_size, sizeof(SymbolPropertyEntry))
 {
...
@@ -532,11 +682,13 @@ void Dictionary::print() {
       tty->cr();
     }
   }
+  tty->cr();
+  _pd_cache_table->print();
   tty->cr();
 }

 #endif

 void Dictionary::verify() {
   guarantee(number_of_entries() >= 0, "Verify of system dictionary failed");
...
@@ -563,5 +715,7 @@ void Dictionary::verify() {
   guarantee(number_of_entries() == element_count,
             "Verify of system dictionary failed");
   debug_only(verify_lookup_length((double)number_of_entries() / table_size()));
+
+  _pd_cache_table->verify();
 }
src/share/vm/classfile/dictionary.hpp
...
@@ -27,11 +27,14 @@
 #include "classfile/systemDictionary.hpp"
 #include "oops/instanceKlass.hpp"
-#include "oops/oop.hpp"
+#include "oops/oop.inline.hpp"
 #include "utilities/hashtable.hpp"

 class DictionaryEntry;
 class PSPromotionManager;
+class ProtectionDomainCacheTable;
+class ProtectionDomainCacheEntry;
+class BoolObjectClosure;

 //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 // The data structure for the system dictionary (and the shared system
...
@@ -45,6 +48,8 @@ private:
   // pointer to the current hash table entry.
   static DictionaryEntry* _current_class_entry;

+  ProtectionDomainCacheTable* _pd_cache_table;
+
   DictionaryEntry* get_entry(int index, unsigned int hash,
                              Symbol* name, ClassLoaderData* loader_data);
...
@@ -93,6 +98,7 @@ public:
   void methods_do(void f(Method*));

+  void unlink(BoolObjectClosure* is_alive);

   // Classes loaded by the bootstrap loader are always strongly reachable.
   // If we're not doing class unloading, all classes are strongly reachable.
...
@@ -118,6 +124,7 @@ public:
   // Sharing support
   void reorder_dictionary();

+  ProtectionDomainCacheEntry* cache_get(oop protection_domain);

 #ifndef PRODUCT
   void print();
...
@@ -126,21 +133,112 @@ public:
 };

 // The following classes can be in dictionary.cpp, but we need these
-// to be in header file so that SA's vmStructs can access.
+// to be in header file so that SA's vmStructs can access them.

+class ProtectionDomainCacheEntry : public HashtableEntry<oop, mtClass> {
+  friend class VMStructs;
+ private:
+  // Flag indicating whether this protection domain entry is strongly reachable.
+  // Used during iterating over the system dictionary to remember oops that need
+  // to be updated.
+  bool _strongly_reachable;
+
+ public:
+  oop protection_domain() { return literal(); }
+
+  void init() {
+    _strongly_reachable = false;
+  }
+
+  ProtectionDomainCacheEntry* next() {
+    return (ProtectionDomainCacheEntry*)HashtableEntry<oop, mtClass>::next();
+  }
+
+  ProtectionDomainCacheEntry** next_addr() {
+    return (ProtectionDomainCacheEntry**)HashtableEntry<oop, mtClass>::next_addr();
+  }
+
+  void oops_do(OopClosure* f) {
+    f->do_oop(literal_addr());
+  }
+
+  void set_strongly_reachable()   { _strongly_reachable = true; }
+  bool is_strongly_reachable()    { return _strongly_reachable; }
+  void reset_strongly_reachable() { _strongly_reachable = false; }
+
+  void print() PRODUCT_RETURN;
+  void verify();
+};
+
+// The ProtectionDomainCacheTable contains all protection domain oops. The system
+// dictionary entries reference its entries instead of having references to oops
+// directly.
+// This is used to speed up system dictionary iteration: the oops in the
+// protection domain are the only ones referring the Java heap. So when there is
+// need to update these, instead of going over every entry of the system dictionary,
+// we only need to iterate over this set.
+// The amount of different protection domains used is typically magnitudes smaller
+// than the number of system dictionary entries (loaded classes).
+class ProtectionDomainCacheTable : public Hashtable<oop, mtClass> {
+  friend class VMStructs;
+ private:
+  ProtectionDomainCacheEntry* bucket(int i) {
+    return (ProtectionDomainCacheEntry*) Hashtable<oop, mtClass>::bucket(i);
+  }
+
+  // The following method is not MT-safe and must be done under lock.
+  ProtectionDomainCacheEntry** bucket_addr(int i) {
+    return (ProtectionDomainCacheEntry**) Hashtable<oop, mtClass>::bucket_addr(i);
+  }
+
+  ProtectionDomainCacheEntry* new_entry(unsigned int hash, oop protection_domain) {
+    ProtectionDomainCacheEntry* entry = (ProtectionDomainCacheEntry*) Hashtable<oop, mtClass>::new_entry(hash, protection_domain);
+    entry->init();
+    return entry;
+  }
+
+  static unsigned int compute_hash(oop protection_domain) {
+    return (unsigned int)(protection_domain->identity_hash());
+  }
+
+  int index_for(oop protection_domain) {
+    return hash_to_index(compute_hash(protection_domain));
+  }
+
+  ProtectionDomainCacheEntry* add_entry(int index, unsigned int hash, oop protection_domain);
+  ProtectionDomainCacheEntry* find_entry(int index, oop protection_domain);
+
+ public:
+  ProtectionDomainCacheTable(int table_size);
+
+  ProtectionDomainCacheEntry* get(oop protection_domain);
+  void free(ProtectionDomainCacheEntry* entry);
+
+  void unlink(BoolObjectClosure* cl);
+
+  // GC support
+  void oops_do(OopClosure* f);
+  void always_strong_oops_do(OopClosure* f);
+
+  static uint bucket_size();
+
+  void print() PRODUCT_RETURN;
+  void verify();
+};
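The comment above is the key design point: because distinct protection domain oops are far fewer than loaded classes, caching them in one identity-hashed table means every dictionary entry that uses the same domain shares a single cache entry. A self-contained toy model of that find-or-add protocol (the `IdentityCache` type, bucket count, and hash are invented for illustration; only the shape of `get` mirrors `ProtectionDomainCacheTable::get`):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// One cache entry per distinct key (standing in for a protection domain oop),
// chained per bucket like HashtableEntry.
struct CacheEntry {
  const void* key;
  CacheEntry* next;
};

class IdentityCache {
 public:
  IdentityCache() : _count(0) {
    for (int i = 0; i < kBuckets; i++) _buckets[i] = NULL;
  }
  ~IdentityCache() {
    for (int i = 0; i < kBuckets; i++) {
      for (CacheEntry* e = _buckets[i]; e != NULL; ) {
        CacheEntry* dead = e; e = e->next; delete dead;
      }
    }
  }
  // Find-or-add: probe the bucket chain; only allocate on a miss, so all
  // callers passing the same key share one entry.
  CacheEntry* get(const void* key) {
    int index = (int)(compute_hash(key) % kBuckets);
    for (CacheEntry* e = _buckets[index]; e != NULL; e = e->next) {
      if (e->key == key) return e;        // hit: share the existing entry
    }
    CacheEntry* entry = new CacheEntry();  // miss: add exactly one entry
    entry->key = key;
    entry->next = _buckets[index];
    _buckets[index] = entry;
    _count++;
    return entry;
  }
  int count() const { return _count; }
 private:
  static const int kBuckets = 8;
  // Identity (address-based) hash, analogous to identity_hash() on an oop.
  static unsigned compute_hash(const void* key) {
    return (unsigned)((uintptr_t)key >> 3);
  }
  CacheEntry* _buckets[kBuckets];
  int _count;
};

inline bool demo_shared_entries() {
  IdentityCache cache;
  static int pd1, pd2;  // stand-ins for two distinct protection domain objects
  bool shared   = (cache.get(&pd1) == cache.get(&pd1));  // same key, same entry
  bool distinct = (cache.get(&pd1) != cache.get(&pd2));  // different keys differ
  return shared && distinct && cache.count() == 2;
}
```

GC then only has to visit `count()` cache entries rather than every class in the dictionary.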
 class ProtectionDomainEntry : public CHeapObj<mtClass> {
   friend class VMStructs;
  public:
   ProtectionDomainEntry* _next;
-  oop _protection_domain;
+  ProtectionDomainCacheEntry* _pd_cache;

-  ProtectionDomainEntry(oop protection_domain, ProtectionDomainEntry* next) {
-    _protection_domain = protection_domain;
+  ProtectionDomainEntry(ProtectionDomainCacheEntry* pd_cache, ProtectionDomainEntry* next) {
+    _pd_cache = pd_cache;
     _next = next;
   }

   ProtectionDomainEntry* next() { return _next; }
-  oop protection_domain() { return _protection_domain; }
+  oop protection_domain() { return _pd_cache->protection_domain(); }
 };

 // An entry in the system dictionary, this describes a class as
...
@@ -151,6 +249,24 @@ class DictionaryEntry : public HashtableEntry<Klass*, mtClass> {
 private:
   // Contains the set of approved protection domains that can access
   // this system dictionary entry.
+  //
+  // This protection domain set is a set of tuples:
+  //
+  //   (InstanceKlass C, initiating class loader ICL, Protection Domain PD)
+  //
+  // [Note that C.protection_domain(), which is stored in the java.lang.Class
+  // mirror of C, is NOT the same as PD]
+  //
+  // If such an entry (C, ICL, PD) exists in the table, it means that
+  // it is okay for a class Foo to reference C, where
+  //
+  //   Foo.protection_domain() == PD, and
+  //   Foo's defining class loader == ICL
+  //
+  // The usage of the PD set can be seen in SystemDictionary::validate_protection_domain()
+  // It is essentially a cache to avoid repeated Java up-calls to
+  // ClassLoader.checkPackageAccess().
+  //
   ProtectionDomainEntry* _pd_set;
   ClassLoaderData* _loader_data;
...
@@ -158,7 +274,7 @@ class DictionaryEntry : public HashtableEntry<Klass*, mtClass> {
   // Tells whether a protection is in the approved set.
   bool contains_protection_domain(oop protection_domain) const;
   // Adds a protection domain to the approved set.
-  void add_protection_domain(oop protection_domain);
+  void add_protection_domain(Dictionary* dict, oop protection_domain);

   Klass* klass() const { return (Klass*)literal(); }
   Klass** klass_addr() { return (Klass**)literal_addr(); }
...
@@ -189,12 +305,11 @@ class DictionaryEntry : public HashtableEntry<Klass*, mtClass> {
          : contains_protection_domain(protection_domain());
   }

-  void protection_domain_set_oops_do(OopClosure* f) {
+  void set_strongly_reachable() {
     for (ProtectionDomainEntry* current = _pd_set;
                                 current != NULL;
                                 current = current->_next) {
-      f->do_oop(&(current->_protection_domain));
+      current->_pd_cache->set_strongly_reachable();
     }
   }
...
@@ -202,7 +317,7 @@ class DictionaryEntry : public HashtableEntry<Klass*, mtClass> {
     for (ProtectionDomainEntry* current = _pd_set;
                                 current != NULL;
                                 current = current->_next) {
-      current->_pd_cache->protection_domain()->verify();
+      current->_pd_cache->protection_domain()->verify();
     }
   }
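The replacement of `protection_domain_set_oops_do` by `set_strongly_reachable` above changes the lifecycle into a two-phase mark-then-sweep over the shared cache: live dictionary entries mark the cache entries they reference, and a later pass frees only the unmarked ones. A behavioral sketch of that protocol (plain C++, toy types, not the HotSpot implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for ProtectionDomainCacheEntry: just the reachability flag.
struct PDCacheEntry {
  bool strongly_reachable;
};

// Phase 1: each live dictionary entry marks the cache entries it still uses
// (the analogue of DictionaryEntry::set_strongly_reachable above).
inline void mark_reachable(const std::vector<PDCacheEntry*>& used) {
  for (size_t i = 0; i < used.size(); i++) {
    used[i]->strongly_reachable = true;
  }
}

// Phase 2: sweep the cache, counting entries nobody marked (a real table
// would unlink and free them) and resetting marks for the next cycle.
inline size_t sweep(std::vector<PDCacheEntry>& cache) {
  size_t freed = 0;
  for (size_t i = 0; i < cache.size(); i++) {
    if (!cache[i].strongly_reachable) {
      freed++;
    }
    cache[i].strongly_reachable = false;  // reset mark for the next iteration
  }
  return freed;
}

inline bool demo_mark_and_sweep() {
  std::vector<PDCacheEntry> cache(3);
  for (size_t i = 0; i < cache.size(); i++) cache[i].strongly_reachable = false;
  std::vector<PDCacheEntry*> used;
  used.push_back(&cache[0]);
  used.push_back(&cache[2]);   // entry 1 has no users left
  mark_reachable(used);
  return sweep(cache) == 1;    // exactly the unmarked entry is reclaimed
}
```

This is why cache entries carry a `_strongly_reachable` flag rather than being visited directly by an `OopClosure` from every dictionary entry.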
...

src/share/vm/classfile/systemDictionary.cpp
...
@@ -1697,6 +1697,24 @@ int SystemDictionary::calculate_systemdictionary_size(int classcount) {
   return newsize;
 }

+#ifdef ASSERT
+class VerifySDReachableAndLiveClosure : public OopClosure {
+private:
+  BoolObjectClosure* _is_alive;
+
+  template <class T> void do_oop_work(T* p) {
+    oop obj = oopDesc::load_decode_heap_oop(p);
+    guarantee(_is_alive->do_object_b(obj), "Oop in system dictionary must be live");
+  }
+
+public:
+  VerifySDReachableAndLiveClosure(BoolObjectClosure* is_alive) : OopClosure(), _is_alive(is_alive) { }
+
+  virtual void do_oop(oop* p)       { do_oop_work(p); }
+  virtual void do_oop(narrowOop* p) { do_oop_work(p); }
+};
+#endif
+
 // Assumes classes in the SystemDictionary are only unloaded at a safepoint
 // Note: anonymous classes are not in the SD.
 bool SystemDictionary::do_unloading(BoolObjectClosure* is_alive) {
...
@@ -1707,7 +1725,15 @@ bool SystemDictionary::do_unloading(BoolObjectClosure* is_alive) {
     unloading_occurred = dictionary()->do_unloading();
     constraints()->purge_loader_constraints();
     resolution_errors()->purge_resolution_errors();
   }
+  // Oops referenced by the system dictionary may get unreachable independently
+  // of the class loader (eg. cached protection domain oops). So we need to
+  // explicitly unlink them here instead of in Dictionary::do_unloading.
+  dictionary()->unlink(is_alive);
+#ifdef ASSERT
+  VerifySDReachableAndLiveClosure cl(is_alive);
+  dictionary()->oops_do(&cl);
+#endif
   return unloading_occurred;
 }
...
src/share/vm/gc_implementation/g1/g1CollectedHeap.cpp
...
@@ -6035,7 +6035,11 @@ void G1CollectedHeap::verify_dirty_region(HeapRegion* hr) {
   // is dirty.
   G1SATBCardTableModRefBS* ct_bs = g1_barrier_set();
   MemRegion mr(hr->bottom(), hr->pre_dummy_top());
-  ct_bs->verify_dirty_region(mr);
+  if (hr->is_young()) {
+    ct_bs->verify_g1_young_region(mr);
+  } else {
+    ct_bs->verify_dirty_region(mr);
+  }
 }

 void G1CollectedHeap::verify_dirty_young_list(HeapRegion* head) {
...
src/share/vm/gc_implementation/g1/g1CollectedHeap.inline.hpp
...
@@ -29,6 +29,7 @@
 #include "gc_implementation/g1/g1CollectedHeap.hpp"
 #include "gc_implementation/g1/g1AllocRegion.inline.hpp"
 #include "gc_implementation/g1/g1CollectorPolicy.hpp"
+#include "gc_implementation/g1/g1SATBCardTableModRefBS.hpp"
 #include "gc_implementation/g1/heapRegionSeq.inline.hpp"
 #include "utilities/taskqueue.hpp"
...
@@ -134,7 +135,7 @@ G1CollectedHeap::dirty_young_block(HeapWord* start, size_t word_size) {
   assert(containing_hr->is_in(end - 1), "it should also contain end - 1");

   MemRegion mr(start, end);
-  g1_barrier_set()->dirty(mr);
+  g1_barrier_set()->g1_mark_as_young(mr);
 }

 inline RefToScanQueue* G1CollectedHeap::task_queue(int i) const {
...
src/share/vm/gc_implementation/g1/g1CollectorPolicy.cpp
...
@@ -319,10 +319,10 @@ G1CollectorPolicy::G1CollectorPolicy() :
 }

 void G1CollectorPolicy::initialize_flags() {
-  set_min_alignment(HeapRegion::GrainBytes);
+  _min_alignment = HeapRegion::GrainBytes;
   size_t card_table_alignment = GenRemSet::max_alignment_constraint(rem_set_name());
   size_t page_size = UseLargePages ? os::large_page_size() : os::vm_page_size();
-  set_max_alignment(MAX3(card_table_alignment, min_alignment(), page_size));
+  _max_alignment = MAX3(card_table_alignment, _min_alignment, page_size);
   if (SurvivorRatio < 1) {
     vm_exit_during_initialization("Invalid survivor ratio specified");
   }
...
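The `_max_alignment` computation above takes the largest of three independent constraints: the G1 region size, the card-table remembered-set constraint, and the OS page size. A minimal stand-in showing why the maximum satisfies all three (the numeric values and the local `MAX3` helper are illustrative assumptions, not the HotSpot macro):

```cpp
#include <cassert>
#include <cstddef>

// Local helper mirroring the shape of HotSpot's MAX3 macro.
template <typename T>
inline T MAX3(T a, T b, T c) {
  T m = (a > b) ? a : b;
  return (m > c) ? m : c;
}

inline bool demo_alignment() {
  size_t min_alignment        = 1024 * 1024;  // e.g. a 1M HeapRegion::GrainBytes
  size_t card_table_alignment = 512 * 64;     // hypothetical rem-set constraint
  size_t page_size            = 4096;         // small-page case

  size_t max_alignment = MAX3(card_table_alignment, min_alignment, page_size);

  // The chosen alignment is never smaller than any single constraint, so a
  // heap aligned to it satisfies all of them at once.
  return max_alignment == 1024 * 1024 &&
         max_alignment >= card_table_alignment &&
         max_alignment >= min_alignment &&
         max_alignment >= page_size;
}
```

With large pages enabled, `page_size` can dominate instead, which is exactly why the maximum (rather than any one term) is used.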
src/share/vm/gc_implementation/g1/g1SATBCardTableModRefBS.cpp
...
@@ -70,6 +70,12 @@ bool G1SATBCardTableModRefBS::mark_card_deferred(size_t card_index) {
   if ((val & (clean_card_mask_val() | deferred_card_val())) == deferred_card_val()) {
     return false;
   }

+  if (val == g1_young_gen) {
+    // the card is for a young gen region. We don't need to keep track of all pointers into young
+    return false;
+  }
+
   // Cached bit can be installed either on a clean card or on a claimed card.
   jbyte new_val = val;
   if (val == clean_card_val()) {
...
@@ -85,6 +91,19 @@ bool G1SATBCardTableModRefBS::mark_card_deferred(size_t card_index) {
   return true;
 }

+void G1SATBCardTableModRefBS::g1_mark_as_young(const MemRegion& mr) {
+  jbyte *const first = byte_for(mr.start());
+  jbyte *const last = byte_after(mr.last());
+
+  memset(first, g1_young_gen, last - first);
+}
+
+#ifndef PRODUCT
+void G1SATBCardTableModRefBS::verify_g1_young_region(MemRegion mr) {
+  verify_region(mr, g1_young_gen, true);
+}
+#endif
+
 G1SATBCardTableLoggingModRefBS::
 G1SATBCardTableLoggingModRefBS(MemRegion whole_heap,
                                int max_covered_regions) :
...
@@ -97,7 +116,11 @@ G1SATBCardTableLoggingModRefBS(MemRegion whole_heap,
 void
 G1SATBCardTableLoggingModRefBS::write_ref_field_work(void* field,
                                                      oop new_val) {
-  jbyte* byte = byte_for(field);
+  volatile jbyte* byte = byte_for(field);
+  if (*byte == g1_young_gen) {
+    return;
+  }
+  OrderAccess::storeload();
   if (*byte != dirty_card) {
     *byte = dirty_card;
     Thread* thr = Thread::current();
...
@@ -129,7 +152,7 @@ G1SATBCardTableLoggingModRefBS::write_ref_field_static(void* field,
 void
 G1SATBCardTableLoggingModRefBS::invalidate(MemRegion mr, bool whole_heap) {
-  jbyte* byte = byte_for(mr.start());
+  volatile jbyte* byte = byte_for(mr.start());
   jbyte* last_byte = byte_for(mr.last());
   Thread* thr = Thread::current();
   if (whole_heap) {
...
@@ -138,25 +161,35 @@ G1SATBCardTableLoggingModRefBS::invalidate(MemRegion mr, bool whole_heap) {
       byte++;
     }
   } else {
-    // Enqueue if necessary.
-    if (thr->is_Java_thread()) {
-      JavaThread* jt = (JavaThread*)thr;
-      while (byte <= last_byte) {
-        if (*byte != dirty_card) {
-          *byte = dirty_card;
-          jt->dirty_card_queue().enqueue(byte);
-        }
-        byte++;
-      }
-    } else {
-      MutexLockerEx x(Shared_DirtyCardQ_lock,
-                      Mutex::_no_safepoint_check_flag);
-      while (byte <= last_byte) {
-        if (*byte != dirty_card) {
-          *byte = dirty_card;
-          _dcqs.shared_dirty_card_queue()->enqueue(byte);
-        }
-        byte++;
-      }
-    }
+    // skip all consecutive young cards
+    for (; byte <= last_byte && *byte == g1_young_gen; byte++);
+    if (byte <= last_byte) {
+      OrderAccess::storeload();
+      // Enqueue if necessary.
+      if (thr->is_Java_thread()) {
+        JavaThread* jt = (JavaThread*)thr;
+        for (; byte <= last_byte; byte++) {
+          if (*byte == g1_young_gen) {
+            continue;
+          }
+          if (*byte != dirty_card) {
+            *byte = dirty_card;
+            jt->dirty_card_queue().enqueue(byte);
+          }
+        }
+      } else {
+        MutexLockerEx x(Shared_DirtyCardQ_lock,
+                        Mutex::_no_safepoint_check_flag);
+        for (; byte <= last_byte; byte++) {
+          if (*byte == g1_young_gen) {
+            continue;
+          }
+          if (*byte != dirty_card) {
+            *byte = dirty_card;
+            _dcqs.shared_dirty_card_queue()->enqueue(byte);
+          }
+        }
+      }
+    }
   }
 }
...
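The young-card value introduced above serves two purposes: `g1_mark_as_young` floods a whole card range with it in one `memset`, and the write barrier and `invalidate` then skip such cards, since pointers into the young generation need no remembered-set tracking. A behavioral sketch in plain C++ (the `ToyCardTable` type and card constants are invented for illustration, not the HotSpot barrier set):

```cpp
#include <cassert>
#include <cstring>

const int kCards = 16;
const unsigned char clean_card  = 0xff;
const unsigned char dirty_card  = 0;
const unsigned char g1_young    = 0x20;  // made-up analogue of g1_young_gen

struct ToyCardTable {
  unsigned char byte_map[kCards];

  ToyCardTable() { memset(byte_map, clean_card, sizeof(byte_map)); }

  // Analogue of g1_mark_as_young(mr): one memset over the card range.
  void mark_as_young(int first, int last) {
    memset(&byte_map[first], g1_young, (size_t)(last - first + 1));
  }

  // Analogue of the non-whole-heap branch of invalidate(): young cards are
  // left untouched, everything else is dirtied.
  void invalidate(int first, int last) {
    for (int i = first; i <= last; i++) {
      if (byte_map[i] == g1_young) {
        continue;  // no remembered-set tracking for pointers into young
      }
      if (byte_map[i] != dirty_card) {
        byte_map[i] = dirty_card;  // a real barrier would also enqueue here
      }
    }
  }
};

inline bool demo_young_cards_skipped() {
  ToyCardTable ct;
  ct.mark_as_young(4, 7);
  ct.invalidate(0, kCards - 1);
  return ct.byte_map[0] == dirty_card &&   // ordinary card got dirtied
         ct.byte_map[5] == g1_young    &&  // young card survived invalidate
         ct.byte_map[8] == dirty_card;
}
```

The real code additionally needs `volatile` accesses and a storeload fence because mutator threads race with refinement threads on the same card bytes; the sketch omits that concurrency.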
src/share/vm/gc_implementation/g1/g1SATBCardTableModRefBS.hpp
...
@@ -38,7 +38,14 @@ class DirtyCardQueueSet;
 // snapshot-at-the-beginning marking.

 class G1SATBCardTableModRefBS: public CardTableModRefBSForCTRS {
+protected:
+  enum G1CardValues {
+    g1_young_gen = CT_MR_BS_last_reserved << 1
+  };
+
 public:
+  static int g1_young_card_val() { return g1_young_gen; }
+
   // Add "pre_val" to a set of objects that may have been disconnected from the
   // pre-marking object graph.
   static void enqueue(oop pre_val);
...
@@ -118,6 +125,9 @@ public:
     _byte_map[card_index] = val;
   }

+  void verify_g1_young_region(MemRegion mr) PRODUCT_RETURN;
+  void g1_mark_as_young(const MemRegion& mr);
+
   bool mark_card_deferred(size_t card_index);

   bool is_card_deferred(size_t card_index) {
...
src/share/vm/gc_implementation/g1/ptrQueue.hpp
...
@@ -80,6 +80,10 @@ public:
   void reset() { if (_buf != NULL) _index = _sz; }

+  void enqueue(volatile void* ptr) {
+    enqueue((void*)(ptr));
+  }
+
   // Enqueues the given "obj".
   void enqueue(void* ptr) {
     if (!_active) return;
...
src/share/vm/gc_implementation/shared/vmGCOperations.hpp
...
@@ -214,9 +214,6 @@ class VM_CollectForMetadataAllocation: public VM_GC_Operation {
     : VM_GC_Operation(gc_count_before, gc_cause, full_gc_count_before, true),
       _loader_data(loader_data), _size(size), _mdtype(mdtype), _result(NULL) {
   }
-  ~VM_CollectForMetadataAllocation() {
-    MetaspaceGC::set_expand_after_GC(false);
-  }
   virtual VMOp_Type type() const { return VMOp_CollectForMetadataAllocation; }
   virtual void doit();
   MetaWord* result() const       { return _result; }
...
src/share/vm/gc_interface/collectedHeap.cpp
...
@@ -202,12 +202,6 @@ void CollectedHeap::collect_as_vm_thread(GCCause::Cause cause) {
       ShouldNotReachHere(); // Unexpected use of this function
   }
 }

-MetaWord* CollectedHeap::satisfy_failed_metadata_allocation(ClassLoaderData* loader_data,
-                                                            size_t size,
-                                                            Metaspace::MetadataType mdtype) {
-  return collector_policy()->satisfy_failed_metadata_allocation(loader_data, size, mdtype);
-}

 void CollectedHeap::pre_initialize() {
   // Used for ReduceInitialCardMarks (when COMPILER2 is used);
...
src/share/vm/gc_interface/collectedHeap.hpp
...
@@ -475,11 +475,6 @@ class CollectedHeap : public CHeapObj<mtInternal> {
   // the context of the vm thread.
   virtual void collect_as_vm_thread(GCCause::Cause cause);

-  // Callback from VM_CollectForMetadataAllocation operation.
-  MetaWord* satisfy_failed_metadata_allocation(ClassLoaderData* loader_data,
-                                               size_t size,
-                                               Metaspace::MetadataType mdtype);
-
   // Returns the barrier set for this heap
   BarrierSet* barrier_set() { return _barrier_set; }
...
src/share/vm/memory/collectorPolicy.cpp
(This diff is collapsed.)
src/share/vm/memory/collectorPolicy.hpp
...
@@ -101,17 +101,12 @@ class CollectorPolicy : public CHeapObj<mtGC> {
   // Return maximum heap alignment that may be imposed by the policy
   static size_t compute_max_alignment();

-  void set_min_alignment(size_t align) { _min_alignment = align; }
   size_t min_alignment()          { return _min_alignment; }
-  void set_max_alignment(size_t align) { _max_alignment = align; }
   size_t max_alignment()          { return _max_alignment; }

   size_t initial_heap_byte_size() { return _initial_heap_byte_size; }
-  void set_initial_heap_byte_size(size_t v) { _initial_heap_byte_size = v; }
   size_t max_heap_byte_size()     { return _max_heap_byte_size; }
-  void set_max_heap_byte_size(size_t v) { _max_heap_byte_size = v; }
   size_t min_heap_byte_size()     { return _min_heap_byte_size; }
-  void set_min_heap_byte_size(size_t v) { _min_heap_byte_size = v; }

   enum Name {
     CollectorPolicyKind,
...
@@ -248,12 +243,9 @@ class GenCollectorPolicy : public CollectorPolicy {
 public:
   // Accessors
   size_t min_gen0_size()     { return _min_gen0_size; }
-  void set_min_gen0_size(size_t v) { _min_gen0_size = v; }
   size_t initial_gen0_size() { return _initial_gen0_size; }
-  void set_initial_gen0_size(size_t v) { _initial_gen0_size = v; }
   size_t max_gen0_size()     { return _max_gen0_size; }
-  void set_max_gen0_size(size_t v) { _max_gen0_size = v; }

   virtual int number_of_generations() = 0;
...
@@ -302,12 +294,9 @@ class TwoGenerationCollectorPolicy : public GenCollectorPolicy {
 public:
   // Accessors
   size_t min_gen1_size()     { return _min_gen1_size; }
-  void set_min_gen1_size(size_t v) { _min_gen1_size = v; }
   size_t initial_gen1_size() { return _initial_gen1_size; }
-  void set_initial_gen1_size(size_t v) { _initial_gen1_size = v; }
   size_t max_gen1_size()     { return _max_gen1_size; }
-  void set_max_gen1_size(size_t v) { _max_gen1_size = v; }

   // Inherited methods
   TwoGenerationCollectorPolicy* as_two_generation_policy() { return this; }
...
src/share/vm/memory/filemap.hpp
...
@@ -26,6 +26,7 @@
 #define SHARE_VM_MEMORY_FILEMAP_HPP

 #include "memory/metaspaceShared.hpp"
+#include "memory/metaspace.hpp"

 // Layout of the file:
 //  header: dump of archive instance plus versioning info, datestamp, etc.
...
src/share/vm/memory/metaspace.cpp
(This diff is collapsed.)
src/share/vm/memory/metaspace.hpp
...
@@ -87,9 +87,10 @@ class Metaspace : public CHeapObj<mtClass> {
   friend class MetaspaceAux;

  public:
-  enum MetadataType {ClassType = 0,
-                     NonClassType = ClassType + 1,
-                     MetadataTypeCount = ClassType + 2
+  enum MetadataType {
+    ClassType,
+    NonClassType,
+    MetadataTypeCount
   };
   enum MetaspaceType {
     StandardMetaspaceType,
...
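The reworked enum relies on default enumerator numbering (`ClassType == 0`, `NonClassType == 1`), with the trailing `MetadataTypeCount` doubling as the size of per-type arrays, so adding a type automatically grows every such array. A small self-contained illustration (the `used_words` array and its values are made up; only the enum shape mirrors the declaration above):

```cpp
#include <cassert>
#include <cstddef>

// Same shape as the reworked Metaspace::MetadataType: implicit values
// 0, 1, and a trailing count used for sizing.
enum MetadataType {
  ClassType,
  NonClassType,
  MetadataTypeCount
};

inline bool demo_enum_indexing() {
  // One slot per metadata type; the count enumerator sizes the array.
  size_t used_words[MetadataTypeCount] = { 0, 0 };
  used_words[ClassType]    = 128;    // hypothetical class-space usage
  used_words[NonClassType] = 4096;   // hypothetical non-class usage

  return ClassType == 0 && NonClassType == 1 && MetadataTypeCount == 2 &&
         used_words[ClassType] + used_words[NonClassType] == 4224;
}
```

This count-terminated pattern is a common C++ idiom for enums that index fixed-size statistics arrays.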
@@ -103,6 +104,9 @@ class Metaspace : public CHeapObj<mtClass> {
  private:
   void initialize(Mutex* lock, MetaspaceType type);

+  // Get the first chunk for a Metaspace.  Used for
+  // special cases such as the boot class loader, reflection
+  // class loader and anonymous class loader.
   Metachunk* get_initialization_chunk(MetadataType mdtype,
                                       size_t chunk_word_size,
                                       size_t chunk_bunch);
...
@@ -123,6 +127,9 @@ class Metaspace : public CHeapObj<mtClass> {
   static size_t _first_chunk_word_size;
   static size_t _first_class_chunk_word_size;

+  static size_t _commit_alignment;
+  static size_t _reserve_alignment;
+
   SpaceManager* _vsm;
   SpaceManager* vsm() const { return _vsm; }
...
@@ -191,12 +198,17 @@ class Metaspace : public CHeapObj<mtClass> {
   Metaspace(Mutex* lock, MetaspaceType type);
   ~Metaspace();

   // Initialize globals for Metaspace
+  static void ergo_initialize();
   static void global_initialize();

   static size_t first_chunk_word_size() { return _first_chunk_word_size; }
   static size_t first_class_chunk_word_size() { return _first_class_chunk_word_size; }

+  static size_t reserve_alignment()       { return _reserve_alignment; }
+  static size_t reserve_alignment_words() { return _reserve_alignment / BytesPerWord; }
+  static size_t commit_alignment()        { return _commit_alignment; }
+  static size_t commit_alignment_words()  { return _commit_alignment / BytesPerWord; }
+
   char* bottom() const;
   size_t used_words_slow(MetadataType mdtype) const;
   size_t free_words_slow(MetadataType mdtype) const;
...
@@ -219,6 +231,9 @@ class Metaspace : public CHeapObj<mtClass> {
   static void purge(MetadataType mdtype);
   static void purge();

+  static void report_metadata_oome(ClassLoaderData* loader_data, size_t word_size,
+                                   MetadataType mdtype, TRAPS);
+
   void print_on(outputStream* st) const;
   // Debugging support
   void verify();
...
@@ -352,17 +367,10 @@ class MetaspaceAux : AllStatic {
 class MetaspaceGC : AllStatic {

-  // The current high-water-mark for inducing a GC. When
-  // the capacity of all space in the virtual lists reaches this value,
-  // a GC is induced and the value is increased. This should be changed
-  // to the space actually used for allocations to avoid affects of
-  // fragmentation losses to partially used chunks. Size is in words.
-  static size_t _capacity_until_GC;
-
-  // After a GC is done any allocation that fails should try to expand
-  // the capacity of the Metaspaces.  This flag is set during attempts
-  // to allocate in the VMGCOperation that does the GC.
-  static bool _expand_after_GC;
+  // The current high-water-mark for inducing a GC.
+  // When committed memory of all metaspaces reaches this value,
+  // a GC is induced and the value is increased. Size is in bytes.
+  static volatile intptr_t _capacity_until_GC;

   // For a CMS collection, signal that a concurrent collection should
   // be started.
...
@@ -370,20 +378,16 @@ class MetaspaceGC : AllStatic {
   static uint _shrink_factor;

-  static void set_capacity_until_GC(size_t v) { _capacity_until_GC = v; }
-
   static size_t shrink_factor() { return _shrink_factor; }
   void set_shrink_factor(uint v) { _shrink_factor = v; }

  public:

   static size_t capacity_until_GC() { return _capacity_until_GC; }
   static void initialize() { _capacity_until_GC = MetaspaceSize; }
   static void inc_capacity_until_GC(size_t v) { _capacity_until_GC
+=
v
;
}
static
void
dec_capacity_until_GC
(
size_t
v
)
{
static
size_t
capacity_until_GC
();
_capacity_until_GC
=
_capacity_until_GC
>
v
?
_capacity_until_GC
-
v
:
0
;
static
size_t
inc_capacity_until_GC
(
size_t
v
);
}
static
size_t
dec_capacity_until_GC
(
size_t
v
);
static
bool
expand_after_GC
()
{
return
_expand_after_GC
;
}
static
void
set_expand_after_GC
(
bool
v
)
{
_expand_after_GC
=
v
;
}
static
bool
should_concurrent_collect
()
{
return
_should_concurrent_collect
;
}
static
bool
should_concurrent_collect
()
{
return
_should_concurrent_collect
;
}
static
void
set_should_concurrent_collect
(
bool
v
)
{
static
void
set_should_concurrent_collect
(
bool
v
)
{
...
@@ -391,11 +395,14 @@ class MetaspaceGC : AllStatic {
...
@@ -391,11 +395,14 @@ class MetaspaceGC : AllStatic {
}
}
// The amount to increase the high-water-mark (_capacity_until_GC)
// The amount to increase the high-water-mark (_capacity_until_GC)
static
size_t
delta_capacity_until_GC
(
size_t
word_size
);
static
size_t
delta_capacity_until_GC
(
size_t
bytes
);
// Tells if we have can expand metaspace without hitting set limits.
static
bool
can_expand
(
size_t
words
,
bool
is_class
);
//
It is expected that this will be called when the current capacity
//
Returns amount that we can expand without hitting a GC,
//
has been used and a GC should be considered
.
//
measured in words
.
static
bool
should_expand
(
VirtualSpaceList
*
vsl
,
size_t
word_size
);
static
size_t
allowed_expansion
(
);
// Calculate the new high-water mark at which to induce
// Calculate the new high-water mark at which to induce
// a GC.
// a GC.
...
...
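The MetaspaceGC hunk above replaces the inlined, racy updates of `_capacity_until_GC` with out-of-line `inc_capacity_until_GC()`/`dec_capacity_until_GC()` declarations on a `volatile intptr_t`. A minimal sketch of that kind of atomically maintained high-water mark, written here in Java with `AtomicLong` (all names are illustrative, and the clamp-at-zero behavior mirrors the old inline `dec` code; nothing below is HotSpot code):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: a GC-triggering high-water mark that several threads may
// raise or lower concurrently without a lock.
class CapacityWatermark {
  private final AtomicLong capacityUntilGC;

  CapacityWatermark(long initialBytes) {
    this.capacityUntilGC = new AtomicLong(initialBytes);
  }

  long get() {
    return capacityUntilGC.get();
  }

  // Raise the mark by delta bytes; returns the new value.
  long inc(long delta) {
    return capacityUntilGC.addAndGet(delta);
  }

  // Lower the mark, clamping at zero like the old inlined version did,
  // using a CAS retry loop instead of an unsynchronized read-modify-write.
  long dec(long delta) {
    while (true) {
      long old = capacityUntilGC.get();
      long next = old > delta ? old - delta : 0;
      if (capacityUntilGC.compareAndSet(old, next)) {
        return next;
      }
    }
  }

  // A GC should be induced once committed bytes reach the mark.
  boolean shouldInduceGC(long committedBytes) {
    return committedBytes >= get();
  }
}
```

The CAS loop is what makes the clamped decrement safe without a mutex; a plain `_capacity_until_GC -= v` can lose updates under contention.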
src/share/vm/opto/graphKit.cpp
...
@@ -3713,7 +3713,8 @@ void GraphKit::g1_write_barrier_post(Node* oop_store,
   Node* no_base = __ top();
   float likely = PROB_LIKELY(0.999);
   float unlikely = PROB_UNLIKELY(0.999);
-  Node* zero = __ ConI(0);
+  Node* young_card = __ ConI((jint)G1SATBCardTableModRefBS::g1_young_card_val());
+  Node* dirty_card = __ ConI((jint)CardTableModRefBS::dirty_card_val());
   Node* zeroX = __ ConX(0);

   // Get the alias_index for raw card-mark memory
...
@@ -3769,8 +3770,16 @@ void GraphKit::g1_write_barrier_post(Node* oop_store,
         // load the original value of the card
         Node* card_val = __ load(__ ctrl(), card_adr, TypeInt::INT, T_BYTE, Compile::AliasIdxRaw);

-        __ if_then(card_val, BoolTest::ne, zero); {
-          g1_mark_card(ideal, card_adr, oop_store, alias_idx, index, index_adr, buffer, tf);
+        __ if_then(card_val, BoolTest::ne, young_card); {
+          sync_kit(ideal);
+          // Use Op_MemBarVolatile to achieve the effect of a StoreLoad barrier.
+          insert_mem_bar(Op_MemBarVolatile, oop_store);
+          __ sync_kit(this);
+
+          Node* card_val_reload = __ load(__ ctrl(), card_adr, TypeInt::INT, T_BYTE, Compile::AliasIdxRaw);
+          __ if_then(card_val_reload, BoolTest::ne, dirty_card); {
+            g1_mark_card(ideal, card_adr, oop_store, alias_idx, index, index_adr, buffer, tf);
+          } __ end_if();
         } __ end_if();
       } __ end_if();
     } __ end_if();
...
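The new compiled barrier above first filters cards that already carry the young-generation value, then issues a StoreLoad barrier (`Op_MemBarVolatile`) and re-reads the card before dirtying and enqueueing it. A sketch of that control flow, with plain integers standing in for card values (the constants and the `CardTable` interface here are illustrative assumptions, not HotSpot's API; the real values come from `G1SATBCardTableModRefBS` and `CardTableModRefBS`):

```java
// Sketch of the card filtering performed by the generated G1 post-barrier.
class G1PostBarrierSketch {
  static final int YOUNG_CARD = 2;  // assumed stand-in for g1_young_card_val()
  static final int DIRTY_CARD = 0;  // assumed stand-in for dirty_card_val()

  interface CardTable {
    int load(long cardIndex);        // may observe a stale value before the fence
    void storeLoadFence();           // models insert_mem_bar(Op_MemBarVolatile, ...)
    void markDirtyAndEnqueue(long cardIndex);  // models g1_mark_card(...)
  }

  static void writeBarrierPost(CardTable ct, long cardIndex) {
    int cardVal = ct.load(cardIndex);
    if (cardVal != YOUNG_CARD) {        // stores into the young gen need no refinement
      ct.storeLoadFence();              // order the reference store before the reload
      int reloaded = ct.load(cardIndex);
      if (reloaded != DIRTY_CARD) {     // already-dirty cards are filtered too
        ct.markDirtyAndEnqueue(cardIndex);
      }
    }
  }
}
```

The reload after the fence is the point of the change: without it, the barrier could race with concurrent refinement and redundantly re-dirty a card it has already processed.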
src/share/vm/runtime/arguments.cpp
...
@@ -2657,16 +2657,16 @@ jint Arguments::parse_each_vm_init_arg(const JavaVMInitArgs* args,
       FLAG_SET_CMDLINE(bool, BackgroundCompilation, false);
     // -Xmn for compatibility with other JVM vendors
     } else if (match_option(option, "-Xmn", &tail)) {
-      julong long_initial_eden_size = 0;
-      ArgsRange errcode = parse_memory_size(tail, &long_initial_eden_size, 1);
+      julong long_initial_young_size = 0;
+      ArgsRange errcode = parse_memory_size(tail, &long_initial_young_size, 1);
       if (errcode != arg_in_range) {
         jio_fprintf(defaultStream::error_stream(),
-                    "Invalid initial eden size: %s\n", option->optionString);
+                    "Invalid initial young generation size: %s\n", option->optionString);
         describe_range_error(errcode);
         return JNI_EINVAL;
       }
-      FLAG_SET_CMDLINE(uintx, MaxNewSize, (uintx)long_initial_eden_size);
-      FLAG_SET_CMDLINE(uintx, NewSize, (uintx)long_initial_eden_size);
+      FLAG_SET_CMDLINE(uintx, MaxNewSize, (uintx)long_initial_young_size);
+      FLAG_SET_CMDLINE(uintx, NewSize, (uintx)long_initial_young_size);
     // -Xms
     } else if (match_option(option, "-Xms", &tail)) {
       julong long_initial_heap_size = 0;
...
@@ -3666,6 +3666,9 @@ jint Arguments::apply_ergo() {
   assert(verify_serial_gc_flags(), "SerialGC unset");
 #endif // INCLUDE_ALL_GCS

+  // Initialize Metaspace flags and alignments.
+  Metaspace::ergo_initialize();
+
   // Set bytecode rewriting flags
   set_bytecode_flags();
...
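As the hunk above shows, a single `-Xmn` value sets both `NewSize` and `MaxNewSize`, pinning the young generation size (the renamed variable and error message now say "young" rather than "eden"). A rough sketch of that mapping in Java; the suffix parsing is a simplified hypothetical stand-in for HotSpot's `parse_memory_size()`:

```java
// Sketch: how one -Xmn value feeds both young-generation size flags.
class XmnSketch {
  long newSize;
  long maxNewSize;

  // Hypothetical simplified parser: digits plus an optional k/m/g suffix.
  static long parseMemorySize(String s) {
    char last = Character.toLowerCase(s.charAt(s.length() - 1));
    long multiplier;
    switch (last) {
      case 'k': multiplier = 1024L; break;
      case 'm': multiplier = 1024L * 1024; break;
      case 'g': multiplier = 1024L * 1024 * 1024; break;
      default:  return Long.parseLong(s);
    }
    return Long.parseLong(s.substring(0, s.length() - 1)) * multiplier;
  }

  void applyXmn(String value) {
    long youngSize = parseMemorySize(value);
    if (youngSize < 1) {  // models the arg_in_range check
      throw new IllegalArgumentException(
          "Invalid initial young generation size: " + value);
    }
    // Mirrors the two FLAG_SET_CMDLINE calls: min and max are both fixed.
    maxNewSize = youngSize;
    newSize = youngSize;
  }
}
```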
src/share/vm/runtime/globals.hpp
This diff is collapsed.
src/share/vm/runtime/virtualspace.cpp
...
@@ -368,8 +368,15 @@ VirtualSpace::VirtualSpace() {
 bool VirtualSpace::initialize(ReservedSpace rs, size_t committed_size) {
+  const size_t max_commit_granularity = os::page_size_for_region(rs.size(), rs.size(), 1);
+  return initialize_with_granularity(rs, committed_size, max_commit_granularity);
+}
+
+bool VirtualSpace::initialize_with_granularity(ReservedSpace rs, size_t committed_size, size_t max_commit_granularity) {
   if(!rs.is_reserved()) return false;  // allocation failed.
   assert(_low_boundary == NULL, "VirtualSpace already initialized");
+  assert(max_commit_granularity > 0, "Granularity must be non-zero.");

   _low_boundary  = rs.base();
   _high_boundary = low_boundary() + rs.size();
...
@@ -390,7 +397,7 @@ bool VirtualSpace::initialize(ReservedSpace rs, size_t committed_size) {
   // No attempt is made to force large page alignment at the very top and
   // bottom of the space if they are not aligned so already.
   _lower_alignment  = os::vm_page_size();
-  _middle_alignment = os::page_size_for_region(rs.size(), rs.size(), 1);
+  _middle_alignment = max_commit_granularity;
   _upper_alignment  = os::vm_page_size();

   // End of each region
...
@@ -966,17 +973,52 @@ void TestReservedSpace_test() {
 class TestVirtualSpace : AllStatic {
+  enum TestLargePages {
+    Default,
+    Disable,
+    Reserve,
+    Commit
+  };
+
+  static ReservedSpace reserve_memory(size_t reserve_size_aligned, TestLargePages mode) {
+    switch(mode) {
+    default:
+    case Default:
+    case Reserve:
+      return ReservedSpace(reserve_size_aligned);
+    case Disable:
+    case Commit:
+      return ReservedSpace(reserve_size_aligned,
+                           os::vm_allocation_granularity(),
+                           /* large */ false, /* exec */ false);
+    }
+  }
+
+  static bool initialize_virtual_space(VirtualSpace& vs, ReservedSpace rs, TestLargePages mode) {
+    switch(mode) {
+    default:
+    case Default:
+    case Reserve:
+      return vs.initialize(rs, 0);
+    case Disable:
+      return vs.initialize_with_granularity(rs, 0, os::vm_page_size());
+    case Commit:
+      return vs.initialize_with_granularity(rs, 0, os::page_size_for_region(rs.size(), rs.size(), 1));
+    }
+  }
+
  public:
-  static void test_virtual_space_actual_committed_space(size_t reserve_size, size_t commit_size) {
+  static void test_virtual_space_actual_committed_space(size_t reserve_size, size_t commit_size,
+                                                        TestLargePages mode = Default) {
     size_t granularity = os::vm_allocation_granularity();
     size_t reserve_size_aligned = align_size_up(reserve_size, granularity);

-    ReservedSpace reserved(reserve_size_aligned);
+    ReservedSpace reserved = reserve_memory(reserve_size_aligned, mode);

     assert(reserved.is_reserved(), "Must be");

     VirtualSpace vs;
-    bool initialized = vs.initialize(reserved, 0);
+    bool initialized = initialize_virtual_space(vs, reserved, mode);
     assert(initialized, "Failed to initialize VirtualSpace");

     vs.expand_by(commit_size, false);
...
@@ -986,7 +1028,10 @@ class TestVirtualSpace : AllStatic {
     } else {
       assert_ge(vs.actual_committed_size(), commit_size);
       // Approximate the commit granularity.
-      size_t commit_granularity = UseLargePages ? os::large_page_size() : os::vm_page_size();
+      // Make sure that we don't commit using large pages
+      // if large pages has been disabled for this VirtualSpace.
+      size_t commit_granularity = (mode == Disable || !UseLargePages) ?
+                                   os::vm_page_size() : os::large_page_size();
       assert_lt(vs.actual_committed_size(), commit_size + commit_granularity);
     }
...
@@ -1042,9 +1087,40 @@ class TestVirtualSpace : AllStatic {
     test_virtual_space_actual_committed_space(10 * M, 10 * M);
   }

+  static void test_virtual_space_disable_large_pages() {
+    if (!UseLargePages) {
+      return;
+    }
+    // These test cases verify that if we force VirtualSpace to disable large pages
+    test_virtual_space_actual_committed_space(10 * M, 0, Disable);
+    test_virtual_space_actual_committed_space(10 * M, 4 * K, Disable);
+    test_virtual_space_actual_committed_space(10 * M, 8 * K, Disable);
+    test_virtual_space_actual_committed_space(10 * M, 1 * M, Disable);
+    test_virtual_space_actual_committed_space(10 * M, 2 * M, Disable);
+    test_virtual_space_actual_committed_space(10 * M, 5 * M, Disable);
+    test_virtual_space_actual_committed_space(10 * M, 10 * M, Disable);
+
+    test_virtual_space_actual_committed_space(10 * M, 0, Reserve);
+    test_virtual_space_actual_committed_space(10 * M, 4 * K, Reserve);
+    test_virtual_space_actual_committed_space(10 * M, 8 * K, Reserve);
+    test_virtual_space_actual_committed_space(10 * M, 1 * M, Reserve);
+    test_virtual_space_actual_committed_space(10 * M, 2 * M, Reserve);
+    test_virtual_space_actual_committed_space(10 * M, 5 * M, Reserve);
+    test_virtual_space_actual_committed_space(10 * M, 10 * M, Reserve);
+
+    test_virtual_space_actual_committed_space(10 * M, 0, Commit);
+    test_virtual_space_actual_committed_space(10 * M, 4 * K, Commit);
+    test_virtual_space_actual_committed_space(10 * M, 8 * K, Commit);
+    test_virtual_space_actual_committed_space(10 * M, 1 * M, Commit);
+    test_virtual_space_actual_committed_space(10 * M, 2 * M, Commit);
+    test_virtual_space_actual_committed_space(10 * M, 5 * M, Commit);
+    test_virtual_space_actual_committed_space(10 * M, 10 * M, Commit);
+  }
+
   static void test_virtual_space() {
     test_virtual_space_actual_committed_space();
     test_virtual_space_actual_committed_space_one_large_page();
+    test_virtual_space_disable_large_pages();
   }
 };
...
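The updated TestVirtualSpace above relies on commits happening in whole granules of the chosen commit granularity: the actually committed size is at least the requested size (`assert_ge`) and strictly less than the request plus one granule (`assert_lt`). That rounding invariant can be sketched with a small illustrative helper (not VM code):

```java
// Sketch of the invariant checked by test_virtual_space_actual_committed_space:
// commits are rounded up to whole granules of the commit granularity.
class CommitGranularitySketch {
  static long actualCommitted(long requestBytes, long granularity) {
    // round the request up to a whole number of commit granules
    long granules = (requestBytes + granularity - 1) / granularity;
    return granules * granularity;
  }
}
```

This is also why the test must pick `os::vm_page_size()` as the expected granularity when large pages are disabled for the space: with a 2M large-page granule, committing 4K would otherwise be allowed to commit a full 2M.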
src/share/vm/runtime/virtualspace.hpp
...
@@ -178,6 +178,7 @@ class VirtualSpace VALUE_OBJ_CLASS_SPEC {
  public:
   // Initialization
   VirtualSpace();
+  bool initialize_with_granularity(ReservedSpace rs, size_t committed_byte_size, size_t max_commit_ganularity);
   bool initialize(ReservedSpace rs, size_t committed_byte_size);

   // Destruction
...
src/share/vm/runtime/vmStructs.cpp
...
@@ -716,11 +716,17 @@ typedef BinaryTreeDictionary<Metablock, FreeList> MetablockTreeDictionary;
   nonstatic_field(PlaceholderEntry, _loader_data, ClassLoaderData*) \
   \
   /**************************/ \
-  /* ProctectionDomainEntry */ \
+  /* ProtectionDomainEntry  */ \
   /**************************/ \
   \
   nonstatic_field(ProtectionDomainEntry, _next, ProtectionDomainEntry*) \
-  nonstatic_field(ProtectionDomainEntry, _protection_domain, oop) \
+  nonstatic_field(ProtectionDomainEntry, _pd_cache, ProtectionDomainCacheEntry*) \
+  \
+  /*******************************/ \
+  /* ProtectionDomainCacheEntry  */ \
+  /*******************************/ \
+  \
+  nonstatic_field(ProtectionDomainCacheEntry, _literal, oop) \
   \
   /*************************/ \
   /* LoaderConstraintEntry */ \
...
@@ -1563,6 +1569,7 @@ typedef BinaryTreeDictionary<Metablock, FreeList> MetablockTreeDictionary;
   declare_toplevel_type(SystemDictionary) \
   declare_toplevel_type(vmSymbols) \
   declare_toplevel_type(ProtectionDomainEntry) \
+  declare_toplevel_type(ProtectionDomainCacheEntry) \
   \
   declare_toplevel_type(GenericGrowableArray) \
   declare_toplevel_type(GrowableArray<int>) \
...
src/share/vm/services/memoryService.hpp
...
@@ -148,6 +148,12 @@ public:
   static void track_code_cache_memory_usage() {
     track_memory_pool_usage(_code_heap_pool);
   }
+  static void track_metaspace_memory_usage() {
+    track_memory_pool_usage(_metaspace_pool);
+  }
+  static void track_compressed_class_memory_usage() {
+    track_memory_pool_usage(_compressed_class_pool);
+  }
   static void track_memory_pool_usage(MemoryPool* pool);

   static void gc_begin(bool fullGC, bool recordGCBeginTime,
...
src/share/vm/utilities/globalDefinitions.hpp
...
@@ -326,12 +326,15 @@ typedef jlong s8;
 const int max_method_code_size = 64*K - 1;  // JVM spec, 2nd ed. section 4.8.1 (p.134)

+// Default ProtectionDomainCacheSize values
+const int defaultProtectionDomainCacheSize = NOT_LP64(137) LP64_ONLY(2017);
+
 //----------------------------------------------------------------------------------------------------
 // Default and minimum StringTableSize values

 const int defaultStringTableSize = NOT_LP64(1009) LP64_ONLY(60013);
 const int minimumStringTableSize = 1009;

 const int defaultSymbolTableSize = 20011;
 const int minimumSymbolTableSize = 1009;
...
test/runtime/memory/LargePages/TestLargePagesFlags.java
0 → 100644
/*
 * Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

/* @test TestLargePagesFlags
 * @summary Tests how large pages are chosen depending on the given large pages flag combinations.
 * @library /testlibrary
 * @run main TestLargePagesFlags
 */

import com.oracle.java.testlibrary.OutputAnalyzer;
import com.oracle.java.testlibrary.Platform;
import com.oracle.java.testlibrary.ProcessTools;
import java.util.ArrayList;

public class TestLargePagesFlags {

  public static void main(String[] args) throws Exception {
    if (!Platform.isLinux()) {
      System.out.println("Skipping. TestLargePagesFlags has only been implemented for Linux.");
      return;
    }

    testUseTransparentHugePages();
    testUseHugeTLBFS();
    testUseSHM();
    testCombinations();
  }

  public static void testUseTransparentHugePages() throws Exception {
    if (!canUse(UseTransparentHugePages(true))) {
      System.out.println("Skipping testUseTransparentHugePages");
      return;
    }

    // -XX:-UseLargePages overrides all other flags.
    new FlagTester()
      .use(UseLargePages(false),
           UseTransparentHugePages(true))
      .expect(
           UseLargePages(false),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(false));

    // Explicitly turn on UseTransparentHugePages.
    new FlagTester()
      .use(UseTransparentHugePages(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(true),
           UseHugeTLBFS(false),
           UseSHM(false));

    new FlagTester()
      .use(UseLargePages(true),
           UseTransparentHugePages(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(true),
           UseHugeTLBFS(false),
           UseSHM(false));

    // Setting a specific large pages flag will turn
    // off heuristics to choose large pages type.
    new FlagTester()
      .use(UseLargePages(true),
           UseTransparentHugePages(false))
      .expect(
           UseLargePages(false),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(false));

    // Don't turn on UseTransparentHugePages
    // unless the user explicitly asks for them.
    new FlagTester()
      .use(UseLargePages(true))
      .expect(
           UseTransparentHugePages(false));
  }

  public static void testUseHugeTLBFS() throws Exception {
    if (!canUse(UseHugeTLBFS(true))) {
      System.out.println("Skipping testUseHugeTLBFS");
      return;
    }

    // -XX:-UseLargePages overrides all other flags.
    new FlagTester()
      .use(UseLargePages(false),
           UseHugeTLBFS(true))
      .expect(
           UseLargePages(false),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(false));

    // Explicitly turn on UseHugeTLBFS.
    new FlagTester()
      .use(UseHugeTLBFS(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(false),
           UseHugeTLBFS(true),
           UseSHM(false));

    new FlagTester()
      .use(UseLargePages(true),
           UseHugeTLBFS(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(false),
           UseHugeTLBFS(true),
           UseSHM(false));

    // Setting a specific large pages flag will turn
    // off heuristics to choose large pages type.
    new FlagTester()
      .use(UseLargePages(true),
           UseHugeTLBFS(false))
      .expect(
           UseLargePages(false),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(false));

    // Using UseLargePages will default to UseHugeTLBFS large pages.
    new FlagTester()
      .use(UseLargePages(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(false),
           UseHugeTLBFS(true),
           UseSHM(false));
  }

  public static void testUseSHM() throws Exception {
    if (!canUse(UseSHM(true))) {
      System.out.println("Skipping testUseSHM");
      return;
    }

    // -XX:-UseLargePages overrides all other flags.
    new FlagTester()
      .use(UseLargePages(false),
           UseSHM(true))
      .expect(
           UseLargePages(false),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(false));

    // Explicitly turn on UseSHM.
    new FlagTester()
      .use(UseSHM(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(true));

    new FlagTester()
      .use(UseLargePages(true),
           UseSHM(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(true));

    // Setting a specific large pages flag will turn
    // off heuristics to choose large pages type.
    new FlagTester()
      .use(UseLargePages(true),
           UseSHM(false))
      .expect(
           UseLargePages(false),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(false));

    // Setting UseLargePages can allow the system to choose
    // UseHugeTLBFS instead of UseSHM, but never UseTransparentHugePages.
    new FlagTester()
      .use(UseLargePages(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(false));
  }

  public static void testCombinations() throws Exception {
    if (!canUse(UseSHM(true)) || !canUse(UseHugeTLBFS(true))) {
      System.out.println("Skipping testUseHugeTLBFSAndUseSHMCombination");
      return;
    }

    // UseHugeTLBFS takes precedence over SHM.
    new FlagTester()
      .use(UseLargePages(true),
           UseHugeTLBFS(true),
           UseSHM(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(false),
           UseHugeTLBFS(true),
           UseSHM(false));

    new FlagTester()
      .use(UseLargePages(true),
           UseHugeTLBFS(false),
           UseSHM(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(true));

    new FlagTester()
      .use(UseLargePages(true),
           UseHugeTLBFS(true),
           UseSHM(false))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(false),
           UseHugeTLBFS(true),
           UseSHM(false));

    new FlagTester()
      .use(UseLargePages(true),
           UseHugeTLBFS(false),
           UseSHM(false))
      .expect(
           UseLargePages(false),
           UseTransparentHugePages(false),
           UseHugeTLBFS(false),
           UseSHM(false));

    if (!canUse(UseTransparentHugePages(true))) {
      return;
    }

    // UseTransparentHugePages takes precedence.
    new FlagTester()
      .use(UseLargePages(true),
           UseTransparentHugePages(true),
           UseHugeTLBFS(true),
           UseSHM(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(true),
           UseHugeTLBFS(false),
           UseSHM(false));

    new FlagTester()
      .use(UseTransparentHugePages(true),
           UseHugeTLBFS(true),
           UseSHM(true))
      .expect(
           UseLargePages(true),
           UseTransparentHugePages(true),
           UseHugeTLBFS(false),
           UseSHM(false));
  }

  private static class FlagTester {
    private Flag[] useFlags;

    public FlagTester use(Flag... useFlags) {
      this.useFlags = useFlags;
      return this;
    }

    public void expect(Flag... expectedFlags) throws Exception {
      if (useFlags == null) {
        throw new IllegalStateException("Must run use() before expect()");
      }

      OutputAnalyzer output = executeNewJVM(useFlags);

      for (Flag flag : expectedFlags) {
        System.out.println("Looking for: " + flag.flagString());
        String strValue = output.firstMatch(".* " + flag.name() + " .* :?= (\\S+).*", 1);

        if (strValue == null) {
          throw new RuntimeException("Flag " + flag.name() + " couldn't be found");
        }

        if (!flag.value().equals(strValue)) {
          throw new RuntimeException("Wrong value for: " + flag.name()
                                     + " expected: " + flag.value()
                                     + " got: " + strValue);
        }
      }

      output.shouldHaveExitValue(0);
    }
  }

  private static OutputAnalyzer executeNewJVM(Flag... flags) throws Exception {
    ArrayList<String> args = new ArrayList<>();
    for (Flag flag : flags) {
      args.add(flag.flagString());
    }
    args.add("-XX:+PrintFlagsFinal");
    args.add("-version");

    ProcessBuilder pb = ProcessTools.createJavaProcessBuilder(args.toArray(new String[args.size()]));
    OutputAnalyzer output = new OutputAnalyzer(pb.start());

    return output;
  }

  private static boolean canUse(Flag flag) {
    try {
      new FlagTester().use(flag).expect(flag);
    } catch (Exception e) {
      return false;
    }

    return true;
  }

  private static Flag UseLargePages(boolean value) {
    return new BooleanFlag("UseLargePages", value);
  }

  private static Flag UseTransparentHugePages(boolean value) {
    return new BooleanFlag("UseTransparentHugePages", value);
  }

  private static Flag UseHugeTLBFS(boolean value) {
    return new BooleanFlag("UseHugeTLBFS", value);
  }

  private static Flag UseSHM(boolean value) {
    return new BooleanFlag("UseSHM", value);
  }

  private static class BooleanFlag implements Flag {
    private String name;
    private boolean value;

    BooleanFlag(String name, boolean value) {
      this.name = name;
      this.value = value;
    }

    public String flagString() {
      return "-XX:" + (value ? "+" : "-") + name;
    }

    public String name() {
      return name;
    }

    public String value() {
      return Boolean.toString(value);
    }
  }

  private static interface Flag {
    public String flagString();
    public String name();
    public String value();
  }
}
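The expectations encoded in TestLargePagesFlags boil down to a small precedence policy: `-XX:-UseLargePages` wins outright, explicitly setting any specific flag disables the heuristic, `UseTransparentHugePages` beats `UseHugeTLBFS`, which beats `UseSHM`. A simplified model of that policy follows; this is an assumption-laden sketch of the Linux behavior the test checks (it assumes hugetlbfs is always usable), not `os_linux.cpp` itself. `null` means a flag was not given on the command line:

```java
// Sketch of the large-pages flag precedence TestLargePagesFlags verifies.
class LargePagesPolicy {
  boolean useLargePages, useTHP, useHugeTLBFS, useSHM;

  static LargePagesPolicy decide(Boolean largePages, Boolean thp,
                                 Boolean hugeTlbfs, Boolean shm) {
    LargePagesPolicy p = new LargePagesPolicy();
    // -XX:-UseLargePages overrides all other flags.
    if (Boolean.FALSE.equals(largePages)) {
      return p;
    }
    boolean anySpecificSet = thp != null || hugeTlbfs != null || shm != null;
    if (Boolean.TRUE.equals(thp)) {
      // UseTransparentHugePages takes precedence over the other kinds.
      p.useLargePages = true;
      p.useTHP = true;
    } else if (anySpecificSet) {
      // An explicitly set specific flag turns the heuristic off;
      // hugetlbfs is preferred over SHM.
      if (Boolean.TRUE.equals(hugeTlbfs)) {
        p.useLargePages = true;
        p.useHugeTLBFS = true;
      } else if (Boolean.TRUE.equals(shm)) {
        p.useLargePages = true;
        p.useSHM = true;
      }
    } else if (Boolean.TRUE.equals(largePages)) {
      // Plain -XX:+UseLargePages: heuristically pick hugetlbfs
      // (assumed available here), never transparent huge pages.
      p.useLargePages = true;
      p.useHugeTLBFS = true;
    }
    return p;
  }
}
```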