openanolis / dragonwell8_hotspot
Commit 3686ad20
Authored on Aug 15, 2013 by rbackman
Merge; parents: e589070a, 9df8cd27

Showing 43 changed files with 353 additions and 695 deletions (+353 -695)
Changed files:

agent/src/share/classes/sun/jvm/hotspot/opto/PhaseCFG.java (+1 -1)
make/bsd/makefiles/adlc.make (+2 -4)
make/linux/makefiles/adlc.make (+2 -4)
make/solaris/makefiles/adlc.make (+2 -4)
make/windows/makefiles/adlc.make (+2 -4)
src/os_cpu/bsd_x86/vm/bsd_x86_32.ad (+0 -26)
src/os_cpu/bsd_x86/vm/bsd_x86_64.ad (+0 -65)
src/os_cpu/linux_x86/vm/linux_x86_32.ad (+0 -26)
src/os_cpu/linux_x86/vm/linux_x86_64.ad (+0 -65)
src/os_cpu/solaris_sparc/vm/solaris_sparc.ad (+0 -27)
src/os_cpu/solaris_x86/vm/solaris_x86_32.ad (+0 -26)
src/os_cpu/solaris_x86/vm/solaris_x86_64.ad (+0 -63)
src/os_cpu/windows_x86/vm/windows_x86_32.ad (+0 -26)
src/os_cpu/windows_x86/vm/windows_x86_64.ad (+0 -63)
src/share/vm/opto/block.cpp (+60 -53)
src/share/vm/opto/block.hpp (+42 -17)
src/share/vm/opto/buildOopMap.cpp (+11 -8)
src/share/vm/opto/c2_globals.hpp (+3 -0)
src/share/vm/opto/chaitin.cpp (+15 -17)
src/share/vm/opto/coalesce.cpp (+20 -16)
src/share/vm/opto/compile.cpp (+2 -2)
src/share/vm/opto/domgraph.cpp (+2 -2)
src/share/vm/opto/gcm.cpp (+70 -67)
src/share/vm/opto/idealGraphPrinter.cpp (+3 -3)
src/share/vm/opto/ifg.cpp (+2 -2)
src/share/vm/opto/lcm.cpp (+52 -45)
src/share/vm/opto/live.cpp (+6 -4)
src/share/vm/opto/loopTransform.cpp (+1 -3)
src/share/vm/opto/node.hpp (+0 -1)
src/share/vm/opto/output.cpp (+18 -19)
src/share/vm/opto/output.hpp (+0 -3)
src/share/vm/opto/postaloc.cpp (+11 -8)
src/share/vm/opto/reg_split.cpp (+11 -11)
src/share/vm/runtime/vmStructs.cpp (+1 -1)
test/compiler/whitebox/ClearMethodStateTest.java (+1 -1)
test/compiler/whitebox/CompilerWhiteBoxTest.java (+6 -1)
test/compiler/whitebox/DeoptimizeAllTest.java (+1 -1)
test/compiler/whitebox/DeoptimizeMethodTest.java (+1 -1)
test/compiler/whitebox/EnqueueMethodForCompilationTest.java (+1 -1)
test/compiler/whitebox/IsMethodCompilableTest.java (+1 -1)
test/compiler/whitebox/MakeMethodNotCompilableTest.java (+1 -1)
test/compiler/whitebox/SetDontInlineMethodTest.java (+1 -1)
test/compiler/whitebox/SetForceInlineMethodTest.java (+1 -1)
agent/src/share/classes/sun/jvm/hotspot/opto/PhaseCFG.java

@@ -44,7 +44,7 @@ public class PhaseCFG extends Phase {
     Type type = db.lookupType("PhaseCFG");
     numBlocksField = new CIntField(type.getCIntegerField("_num_blocks"), 0);
     blocksField = type.getAddressField("_blocks");
-    bbsField = type.getAddressField("_bbs");
+    bbsField = type.getAddressField("_node_to_block_mapping");
     brootField = type.getAddressField("_broot");
   }
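The serviceability-agent change above mirrors the C2 refactoring that runs through this whole commit: the `Block_Array _bbs`, which callers indexed directly by `Node::_idx`, is renamed to `_node_to_block_mapping` and hidden behind accessors (`get_block_for_node`, `map_node_to_block`, `has_block`). The block.cpp hunks later in this page apply that substitution at every call site. A minimal standalone sketch of the accessor pattern, using simplified stand-ins for `Node` and `Block` and a `std::vector` in place of HotSpot's arena-backed `Block_Array` (all three simplifications are assumptions for the example):

```cpp
#include <cstddef>
#include <vector>

// Simplified stand-ins for HotSpot's Node and Block types.
struct Node  { unsigned _idx; };  // every IR node carries a unique index
struct Block { int id; };         // a basic block of the CFG

class PhaseCFG {
  // Entry i holds the block containing the node whose _idx == i.
  // Grows on demand; unmapped nodes read back as nullptr.
  std::vector<Block*> _node_to_block_mapping;

public:
  // Replaces direct stores like _bbs.map(n->_idx, b).
  void map_node_to_block(const Node* n, Block* b) {
    if (_node_to_block_mapping.size() <= n->_idx) {
      _node_to_block_mapping.resize(n->_idx + 1, nullptr);
    }
    _node_to_block_mapping[n->_idx] = b;
  }

  // Replaces direct reads like _bbs[n->_idx].
  Block* get_block_for_node(const Node* n) const {
    if (n->_idx >= _node_to_block_mapping.size()) return nullptr;
    return _node_to_block_mapping[n->_idx];
  }

  // Replaces membership checks like _bbs.lookup(n->_idx).
  bool has_block(const Node* n) const {
    return get_block_for_node(n) != nullptr;
  }
};
```

With this shape a caller writes `cfg->get_block_for_node(pred(i))` instead of `bbs[pred(i)->_idx]`, which is exactly the substitution the block.cpp diff performs.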
make/bsd/makefiles/adlc.make

@@ -41,13 +41,11 @@ SOURCE.AD = $(OUTDIR)/$(OS)_$(Platform_arch_model).ad
 ifeq ("${Platform_arch_model}", "${Platform_arch}")
 SOURCES.AD = \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch_model).ad) \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/os_cpu/$(OS)_$(ARCH)/vm/$(OS)_$(Platform_arch_model).ad)
+  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch_model).ad)
 else
 SOURCES.AD = \
   $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch_model).ad) \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch).ad) \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/os_cpu/$(OS)_$(ARCH)/vm/$(OS)_$(Platform_arch_model).ad)
+  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch).ad)
 endif
 EXEC = $(OUTDIR)/adlc
make/linux/makefiles/adlc.make

@@ -41,13 +41,11 @@ SOURCE.AD = $(OUTDIR)/$(OS)_$(Platform_arch_model).ad
 ifeq ("${Platform_arch_model}", "${Platform_arch}")
 SOURCES.AD = \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch_model).ad) \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/os_cpu/$(OS)_$(ARCH)/vm/$(OS)_$(Platform_arch_model).ad)
+  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch_model).ad)
 else
 SOURCES.AD = \
   $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch_model).ad) \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch).ad) \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/os_cpu/$(OS)_$(ARCH)/vm/$(OS)_$(Platform_arch_model).ad)
+  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch).ad)
 endif
 EXEC = $(OUTDIR)/adlc
make/solaris/makefiles/adlc.make

@@ -42,13 +42,11 @@ SOURCE.AD = $(OUTDIR)/$(OS)_$(Platform_arch_model).ad
 ifeq ("${Platform_arch_model}", "${Platform_arch}")
 SOURCES.AD = \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch_model).ad) \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/os_cpu/$(OS)_$(ARCH)/vm/$(OS)_$(Platform_arch_model).ad)
+  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch_model).ad)
 else
 SOURCES.AD = \
   $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch_model).ad) \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch).ad) \
-  $(call altsrc-replace,$(HS_COMMON_SRC)/os_cpu/$(OS)_$(ARCH)/vm/$(OS)_$(Platform_arch_model).ad)
+  $(call altsrc-replace,$(HS_COMMON_SRC)/cpu/$(ARCH)/vm/$(Platform_arch).ad)
 endif
 EXEC = $(OUTDIR)/adlc
make/windows/makefiles/adlc.make

@@ -55,13 +55,11 @@ CXX_INCLUDE_DIRS=\
 !if "$(Platform_arch_model)" == "$(Platform_arch)"
 SOURCES_AD=\
-  $(WorkSpace)/src/cpu/$(Platform_arch)/vm/$(Platform_arch_model).ad \
-  $(WorkSpace)/src/os_cpu/windows_$(Platform_arch)/vm/windows_$(Platform_arch_model).ad
+  $(WorkSpace)/src/cpu/$(Platform_arch)/vm/$(Platform_arch_model).ad
 !else
 SOURCES_AD=\
   $(WorkSpace)/src/cpu/$(Platform_arch)/vm/$(Platform_arch_model).ad \
-  $(WorkSpace)/src/cpu/$(Platform_arch)/vm/$(Platform_arch).ad \
-  $(WorkSpace)/src/os_cpu/windows_$(Platform_arch)/vm/windows_$(Platform_arch_model).ad
+  $(WorkSpace)/src/cpu/$(Platform_arch)/vm/$(Platform_arch).ad
 !endif
 # NOTE! If you add any files here, you must also update GENERATED_NAMES_IN_DIR
src/os_cpu/bsd_x86/vm/bsd_x86_32.ad
Deleted (file mode 100644 → 0)
//
// Copyright (c) 1999, 2012, Oracle and/or its affiliates. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
// under the terms of the GNU General Public License version 2 only, as
// published by the Free Software Foundation.
//
// This code is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
// version 2 for more details (a copy is included in the LICENSE file that
// accompanied this code).
//
// You should have received a copy of the GNU General Public License version
// 2 along with this work; if not, write to the Free Software Foundation,
// Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
//
// Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
// or visit www.oracle.com if you need additional information or have any
// questions.
//
//
// X86 Bsd Architecture Description File
src/os_cpu/bsd_x86/vm/bsd_x86_64.ad
Deleted (file mode 100644 → 0)
//
// Copyright (c) 2003, 2012, Oracle and/or its affiliates. All rights reserved.
// (Remainder of the standard GPLv2 license header omitted; identical to the one kept in bsd_x86_32.ad above.)
//
// AMD64 Bsd Architecture Description File
//----------OS-DEPENDENT ENCODING BLOCK----------------------------------------
// This block specifies the encoding classes used by the compiler to
// output byte streams. Encoding classes generate functions which are
// called by Machine Instruction Nodes in order to generate the bit
// encoding of the instruction. Operands specify their base encoding
// interface with the interface keyword. There are currently
// supported four interfaces, REG_INTER, CONST_INTER, MEMORY_INTER, &
// COND_INTER. REG_INTER causes an operand to generate a function
// which returns its register number when queried. CONST_INTER causes
// an operand to generate a function which returns the value of the
// constant when queried. MEMORY_INTER causes an operand to generate
// four functions which return the Base Register, the Index Register,
// the Scale Value, and the Offset Value of the operand when queried.
// COND_INTER causes an operand to generate six functions which return
// the encoding code (ie - encoding bits for the instruction)
// associated with each basic boolean condition for a conditional
// instruction. Instructions specify two basic values for encoding.
// They use the ins_encode keyword to specify their encoding class
// (which must be one of the class names specified in the encoding
// block), and they use the opcode keyword to specify, in order, their
// primary, secondary, and tertiary opcode. Only the opcode sections
// which a particular instruction needs for encoding need to be
// specified.
encode %{
// Build emit functions for each basic byte or larger field in the intel
// encoding scheme (opcode, rm, sib, immediate), and call them from C++
// code in the enc_class source block. Emit functions will live in the
// main source block for now. In future, we can generalize this by
// adding a syntax that specifies the sizes of fields in an order,
// so that the adlc can build the emit functions automagically
%}
// Platform dependent source
source %{
%}
src/os_cpu/linux_x86/vm/linux_x86_32.ad
Deleted (file mode 100644 → 0)
//
// Copyright (c) 1999, 2012, Oracle and/or its affiliates. All rights reserved.
// (Remainder of the standard GPLv2 license header omitted; identical to the one kept in bsd_x86_32.ad above.)
//
// X86 Linux Architecture Description File
src/os_cpu/linux_x86/vm/linux_x86_64.ad
Deleted (file mode 100644 → 0)
//
// Copyright (c) 2003, 2012, Oracle and/or its affiliates. All rights reserved.
// (Remainder of the standard GPLv2 license header omitted; identical to the one kept in bsd_x86_32.ad above.)
//
// AMD64 Linux Architecture Description File
// (OS-dependent encoding block comment and the empty encode %{ %} / source %{ %} sections omitted — verbatim duplicates of those kept in bsd_x86_64.ad above.)
src/os_cpu/solaris_sparc/vm/solaris_sparc.ad
Deleted (file mode 100644 → 0)
//
// Copyright (c) 1999, 2007, Oracle and/or its affiliates. All rights reserved.
// (Remainder of the standard GPLv2 license header omitted; identical to the one kept in bsd_x86_32.ad above.)
//
//
//
// SPARC Solaris Architecture Description File
src/os_cpu/solaris_x86/vm/solaris_x86_32.ad
Deleted (file mode 100644 → 0)
//
// Copyright (c) 1999, 2012, Oracle and/or its affiliates. All rights reserved.
// (Remainder of the standard GPLv2 license header omitted; identical to the one kept in bsd_x86_32.ad above.)
//
// X86 Solaris Architecture Description File
src/os_cpu/solaris_x86/vm/solaris_x86_64.ad
Deleted (file mode 100644 → 0)
//
// Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved.
// (Remainder of the standard GPLv2 license header omitted; identical to the one kept in bsd_x86_32.ad above.)
//
// AMD64 Solaris Architecture Description File
// (OS-dependent encoding block comment and the empty encode %{ %} / source %{ %} sections omitted — verbatim duplicates of those kept in bsd_x86_64.ad above.)
src/os_cpu/windows_x86/vm/windows_x86_32.ad
Deleted (file mode 100644 → 0)
//
// Copyright (c) 1999, 2012, Oracle and/or its affiliates. All rights reserved.
// (Remainder of the standard GPLv2 license header omitted; identical to the one kept in bsd_x86_32.ad above.)
//
// X86 Win32 Architecture Description File
src/os_cpu/windows_x86/vm/windows_x86_64.ad
Deleted (file mode 100644 → 0)
//
// Copyright (c) 2003, 2012, Oracle and/or its affiliates. All rights reserved.
// (Remainder of the standard GPLv2 license header omitted; identical to the one kept in bsd_x86_32.ad above.)
//
// AMD64 Win32 Architecture Description File
// (OS-dependent encoding block comment and the empty encode %{ %} / source %{ %} sections omitted — same content as in bsd_x86_64.ad above, only re-wrapped.)
src/share/vm/opto/block.cpp
@@ -221,7 +221,7 @@ bool Block::has_uncommon_code() const {
 //------------------------------is_uncommon------------------------------------
 // True if block is low enough frequency or guarded by a test which
 // mostly does not go here.
-bool Block::is_uncommon( Block_Array &bbs ) const {
+bool Block::is_uncommon(PhaseCFG* cfg) const {
   // Initial blocks must never be moved, so are never uncommon.
   if( head()->is_Root() || head()->is_Start() )  return false;
@@ -238,7 +238,7 @@ bool Block::is_uncommon( Block_Array &bbs ) const {
   uint uncommon_for_freq_preds = 0;
   for( uint i=1; i<num_preds(); i++ ) {
-    Block* guard = bbs[pred(i)->_idx];
+    Block* guard = cfg->get_block_for_node(pred(i));
     // Check to see if this block follows its guard 1 time out of 10000
     // or less.
     //
@@ -285,11 +285,11 @@ void Block::dump_bidx(const Block* orig, outputStream* st) const {
   }
 }

-void Block::dump_pred(const Block_Array *bbs, Block* orig, outputStream* st) const {
+void Block::dump_pred(const PhaseCFG* cfg, Block* orig, outputStream* st) const {
   if (is_connector()) {
     for (uint i=1; i<num_preds(); i++) {
-      Block *p = ((*bbs)[pred(i)->_idx]);
-      p->dump_pred(bbs, orig, st);
+      Block *p = cfg->get_block_for_node(pred(i));
+      p->dump_pred(cfg, orig, st);
     }
   } else {
     dump_bidx(orig, st);
@@ -297,7 +297,7 @@ void Block::dump_pred(const Block_Array *bbs, Block* orig, outputStream* st) const {
   }
 }

-void Block::dump_head( const Block_Array *bbs, outputStream* st ) const {
+void Block::dump_head(const PhaseCFG* cfg, outputStream* st) const {
   // Print the basic block
   dump_bidx(this, st);
   st->print(": #\t");
@@ -311,26 +311,28 @@ void Block::dump_head( const Block_Array *bbs, outputStream* st ) const {
   if (head()->is_block_start()) {
     for (uint i=1; i<num_preds(); i++) {
       Node *s = pred(i);
-      if (bbs) {
-        Block *p = (*bbs)[s->_idx];
-        p->dump_pred(bbs, p, st);
+      if (cfg != NULL) {
+        Block *p = cfg->get_block_for_node(s);
+        p->dump_pred(cfg, p, st);
       } else {
         while (!s->is_block_start())
           s = s->in(0);
         st->print("N%d ", s->_idx);
       }
     }
-  } else
+  } else {
     st->print("BLOCK HEAD IS JUNK  ");
+  }

   // Print loop, if any
   const Block *bhead = this;    // Head of self-loop
   Node *bh = bhead->head();
-  if( bbs && bh->is_Loop() && !head()->is_Root() ) {
+  if ((cfg != NULL) && bh->is_Loop() && !head()->is_Root()) {
     LoopNode *loop = bh->as_Loop();
-    const Block *bx = (*bbs)[loop->in(LoopNode::LoopBackControl)->_idx];
+    const Block *bx = cfg->get_block_for_node(loop->in(LoopNode::LoopBackControl));
     while (bx->is_connector()) {
-      bx = (*bbs)[bx->pred(1)->_idx];
+      bx = cfg->get_block_for_node(bx->pred(1));
     }
     st->print("\tLoop: B%d-B%d ", bhead->_pre_order, bx->_pre_order);
     // Dump any loop-specific bits, especially for CountedLoops.
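The explicit `cfg != NULL` check above preserves the fallback path: when `dump_head` is called without a CFG (as `Block::dump()` does, passing `NULL`), a predecessor's block is identified by walking the node's control input `in(0)` back to a block-start node instead of consulting the node-to-block mapping. A toy sketch of that walk — the `_idx` / `in(0)` / `is_block_start()` names follow HotSpot, while the node layout itself is invented for the example:

```cpp
#include <cstddef>

// Toy IR node: a control-input chain plus a "starts a block" flag.
struct Node {
  int   _idx;          // node index, as printed by "N%d"
  Node* _ctrl;         // stand-in for in(0), the control input
  bool  _block_start;
  Node* in0() const { return _ctrl; }
  bool  is_block_start() const { return _block_start; }
};

// Mirrors the dump_head fallback: follow control inputs until a
// block-start node is reached, then report that node's index.
int find_block_start_idx(const Node* s) {
  while (!s->is_block_start()) {
    s = s->in0();
  }
  return s->_idx;
}
```

The walk terminates because, in a well-formed CFG, every control chain eventually reaches the node that heads its basic block.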
@@ -349,29 +351,32 @@ void Block::dump_head( const Block_Array *bbs, outputStream* st ) const {
   st->print_cr("");
 }

-void Block::dump() const { dump(NULL); }
+void Block::dump() const {
+  dump(NULL);
+}

-void Block::dump( const Block_Array *bbs ) const {
-  dump_head(bbs);
-  uint cnt = _nodes.size();
-  for( uint i=0; i<cnt; i++ )
+void Block::dump(const PhaseCFG* cfg) const {
+  dump_head(cfg);
+  for (uint i = 0; i < _nodes.size(); i++) {
     _nodes[i]->dump();
+  }
   tty->print("\n");
 }
 #endif

 //=============================================================================
 //------------------------------PhaseCFG---------------------------------------
-PhaseCFG::PhaseCFG( Arena *a, RootNode *r, Matcher &m ) :
-  Phase(CFG),
-  _bbs(a),
-  _root(r),
-  _node_latency(NULL)
+PhaseCFG::PhaseCFG(Arena* arena, RootNode* root, Matcher& matcher)
+: Phase(CFG)
+, _block_arena(arena)
+, _node_to_block_mapping(arena)
+, _root(root)
+, _node_latency(NULL)
 #ifndef PRODUCT
-  , _trace_opto_pipelining(TraceOptoPipelining || C->method_has_option("TraceOptoPipelining"))
+, _trace_opto_pipelining(TraceOptoPipelining || C->method_has_option("TraceOptoPipelining"))
 #endif
 #ifdef ASSERT
-  , _raw_oops(a)
+, _raw_oops(arena)
 #endif
 {
   ResourceMark rm;
@@ -380,13 +385,13 @@ PhaseCFG::PhaseCFG( Arena *a, RootNode *r, Matcher &m ) :
   // Node on demand.
   Node *x = new (C) GotoNode(NULL);
   x->init_req(0, x);
-  _goto = m.match_tree(x);
+  _goto = matcher.match_tree(x);
   assert(_goto != NULL, "");
   _goto->set_req(0,_goto);

   // Build the CFG in Reverse Post Order
   _num_blocks = build_cfg();
-  _broot = _bbs[_root->_idx];
+  _broot = get_block_for_node(_root);
 }

 //------------------------------build_cfg--------------------------------------
@@ -440,9 +445,9 @@ uint PhaseCFG::build_cfg() {
         // 'p' now points to the start of this basic block

         // Put self in array of basic blocks
-        Block *bb = new (_bbs._arena) Block(_bbs._arena, p);
-        _bbs.map(p->_idx, bb);
-        _bbs.map(x->_idx, bb);
+        Block *bb = new (_block_arena) Block(_block_arena, p);
+        map_node_to_block(p, bb);
+        map_node_to_block(x, bb);
         if( x != p ) {                // Only for root is x == p
           bb->_nodes.push((Node*)x);
         }
@@ -473,16 +478,16 @@ uint PhaseCFG::build_cfg() {
      // Check if it the fist node pushed on stack at the beginning.
      if (idx == 0) break;          // end of the build
      // Find predecessor basic block
-      Block *pb = _bbs[x->_idx];
+      Block* pb = get_block_for_node(x);
      // Insert into nodes array, if not already there
-      if (!_bbs.lookup(proj->_idx)) {
+      if (!has_block(proj)) {
        assert( x != proj, "" );
        // Map basic block of projection
-        _bbs.map(proj->_idx, pb);
+        map_node_to_block(proj, pb);
        pb->_nodes.push(proj);
      }
      // Insert self as a child of my predecessor block
-      pb->_succs.map(pb->_num_succs++, _bbs[np->_idx]);
+      pb->_succs.map(pb->_num_succs++, get_block_for_node(np));
      assert( pb->_nodes[pb->_nodes.size() - pb->_num_succs]->is_block_proj(),
              "too many control users, not a CFG?" );
    }
...
...
@@ -511,15 +516,15 @@ void PhaseCFG::insert_goto_at(uint block_no, uint succ_no) {
  RegionNode* region = new (C) RegionNode(2);
  region->init_req(1, proj);
  // setup corresponding basic block
-  Block *block = new (_bbs._arena) Block(_bbs._arena, region);
-  _bbs.map(region->_idx, block);
+  Block* block = new (_block_arena) Block(_block_arena, region);
+  map_node_to_block(region, block);
  C->regalloc()->set_bad(region->_idx);
  // add a goto node
  Node* gto = _goto->clone(); // get a new goto node
  gto->set_req(0, region);
  // add it to the basic block
  block->_nodes.push(gto);
-  _bbs.map(gto->_idx, block);
+  map_node_to_block(gto, block);
  C->regalloc()->set_bad(gto->_idx);
  // hook up successor block
  block->_succs.map(block->_num_succs++, out);
...
...
@@ -570,7 +575,7 @@ void PhaseCFG::convert_NeverBranch_to_Goto(Block *b) {
  gto->set_req(0, b->head());
  Node *bp = b->_nodes[end_idx];
  b->_nodes.map(end_idx, gto); // Slam over NeverBranch
-  _bbs.map(gto->_idx, b);
+  map_node_to_block(gto, b);
  C->regalloc()->set_bad(gto->_idx);
  b->_nodes.pop();              // Yank projections
  b->_nodes.pop();              // Yank projections
...
...
@@ -613,7 +618,7 @@ bool PhaseCFG::move_to_next(Block* bx, uint b_index) {
  // If the previous block conditionally falls into bx, return false,
  // because moving bx will create an extra jump.
  for (uint k = 1; k < bx->num_preds(); k++) {
-    Block* pred = _bbs[bx->pred(k)->_idx];
+    Block* pred = get_block_for_node(bx->pred(k));
    if (pred == _blocks[bx_index - 1]) {
      if (pred->_num_succs != 1) {
        return false;
...
...
@@ -682,7 +687,7 @@ void PhaseCFG::remove_empty() {
    // Look for uncommon blocks and move to end.
    if (!C->do_freq_based_layout()) {
-      if( b->is_uncommon(_bbs) ) {
+      if (b->is_uncommon(this)) {
        move_to_end(b, i);
        last--;                   // No longer check for being uncommon!
        if( no_flip_branch(b) ) { // Fall-thru case must follow?
...
...
@@ -870,28 +875,31 @@ void PhaseCFG::_dump_cfg( const Node *end, VectorSet &visited ) const {
  }
  while (!p->is_block_start());

  // Recursively visit
-  for (uint i = 1; i < p->req(); i++)
-    _dump_cfg(p->in(i), visited);
+  for (uint i = 1; i < p->req(); i++) {
+    _dump_cfg(p->in(i), visited);
+  }

  // Dump the block
-  _bbs[p->_idx]->dump(&_bbs);
+  get_block_for_node(p)->dump(this);
}

void PhaseCFG::dump( ) const {
  tty->print("\n--- CFG --- %d BBs\n", _num_blocks);
-  if( _blocks.size() ) {        // Did we do basic-block layout?
-    for( uint i = 0; i < _num_blocks; i++ )
-      _blocks[i]->dump(&_bbs);
+  if (_blocks.size()) {         // Did we do basic-block layout?
+    for (uint i = 0; i < _num_blocks; i++) {
+      _blocks[i]->dump(this);
+    }
  } else {                      // Else do it with a DFS
-    VectorSet visited(_bbs._arena);
+    VectorSet visited(_block_arena);
    _dump_cfg(_root, visited);
  }
}

void PhaseCFG::dump_headers() {
  for (uint i = 0; i < _num_blocks; i++) {
-    if( _blocks[i] == NULL ) continue;
-    _blocks[i]->dump_head(&_bbs);
+    if (_blocks[i]) {
+      _blocks[i]->dump_head(this);
+    }
  }
}
...
...
@@ -904,7 +912,7 @@ void PhaseCFG::verify( ) const {
    uint j;
    for (j = 0; j < cnt; j++) {
      Node *n = b->_nodes[j];
-      assert( _bbs[n->_idx] == b, "" );
+      assert(get_block_for_node(n) == b, "");
      if (j >= 1 && n->is_Mach() && n->as_Mach()->ideal_Opcode() == Op_CreateEx) {
        assert(j == 1 || b->_nodes[j-1]->is_Phi(),
...
...
@@ -913,13 +921,12 @@ void PhaseCFG::verify( ) const {
      for (uint k = 0; k < n->req(); k++) {
        Node *def = n->in(k);
        if (def && def != n) {
-          assert( _bbs[def->_idx] || def->is_Con(),
-                  "must have block; constants for debug info ok" );
+          assert(get_block_for_node(def) || def->is_Con(), "must have block; constants for debug info ok");
          // Verify that instructions in the block is in correct order.
          // Uses must follow their definition if they are at the same block.
          // Mostly done to check that MachSpillCopy nodes are placed correctly
          // when CreateEx node is moved in build_ifg_physical().
-          if( _bbs[def->_idx] == b &&
+          if (get_block_for_node(def) == b &&
              !(b->head()->is_Loop() && n->is_Phi()) &&
              // See (+++) comment in reg_split.cpp
              !(n->jvms() != NULL && n->jvms()->is_monitor_use(k))) {
...
...
src/share/vm/opto/block.hpp
View file @ 3686ad20
...
...
@@ -48,13 +48,12 @@ class Block_Array : public ResourceObj {
  friend class VMStructs;
  uint _size;                   // allocated size, as opposed to formal limit
  debug_only(uint _limit;)      // limit to formal domain
+  Arena *_arena;                // Arena to allocate in
 protected:
  Block **_blocks;
  void grow( uint i );          // Grow array node to fit

 public:
-  Arena *_arena;                // Arena to allocate in
  Block_Array(Arena *a) : _arena(a), _size(OptoBlockListSize) {
    debug_only(_limit=0);
    _blocks = NEW_ARENA_ARRAY( a, Block *, OptoBlockListSize );
...
...
@@ -77,7 +76,7 @@ class Block_List : public Block_Array {
 public:
  uint _cnt;
  Block_List() : Block_Array(Thread::current()->resource_area()), _cnt(0) {}
  void push( Block *b ) { map(_cnt++, b); }
  Block *pop() { return _blocks[--_cnt]; }
  Block *rpop() { Block *b = _blocks[0]; _blocks[0] = _blocks[--_cnt]; return b;}
  void remove( uint i );
...
...
@@ -284,15 +283,15 @@ class Block : public CFGElement {
  // helper function that adds caller save registers to MachProjNode
  void add_call_kills(MachProjNode *proj, RegMask& regs, const char* save_policy, bool exclude_soe);
  // Schedule a call next in the block
-  uint sched_call(Matcher &matcher, Block_Array &bbs, uint node_cnt, Node_List &worklist, GrowableArray<int> &ready_cnt, MachCallNode *mcall, VectorSet &next_call);
+  uint sched_call(Matcher &matcher, PhaseCFG* cfg, uint node_cnt, Node_List &worklist, GrowableArray<int> &ready_cnt, MachCallNode *mcall, VectorSet &next_call);

  // Perform basic-block local scheduling
  Node *select(PhaseCFG *cfg, Node_List &worklist, GrowableArray<int> &ready_cnt, VectorSet &next_call, uint sched_slot);
-  void set_next_call( Node *n, VectorSet &next_call, Block_Array &bbs );
-  void needed_for_next_call(Node *this_call, VectorSet &next_call, Block_Array &bbs);
+  void set_next_call(Node* n, VectorSet& next_call, PhaseCFG* cfg);
+  void needed_for_next_call(Node* this_call, VectorSet& next_call, PhaseCFG* cfg);
  bool schedule_local(PhaseCFG *cfg, Matcher &m, GrowableArray<int> &ready_cnt, VectorSet &next_call);
  // Cleanup if any code lands between a Call and his Catch
-  void call_catch_cleanup(Block_Array &bbs, Compile *C);
+  void call_catch_cleanup(PhaseCFG* cfg, Compile* C);
  // Detect implicit-null-check opportunities.  Basically, find NULL checks
  // with suitable memory ops nearby.  Use the memory op to do the NULL check.
  // I can generate a memory op if there is not one nearby.
...
...
@@ -331,15 +330,15 @@ class Block : public CFGElement {
  // Use frequency calculations and code shape to predict if the block
  // is uncommon.
-  bool is_uncommon( Block_Array &bbs ) const;
+  bool is_uncommon(PhaseCFG* cfg) const;

#ifndef PRODUCT
  // Debugging print of basic block
  void dump_bidx(const Block* orig, outputStream* st = tty) const;
-  void dump_pred( const Block_Array *bbs, Block *orig, outputStream* st = tty ) const;
-  void dump_head( const Block_Array *bbs, outputStream* st = tty ) const;
+  void dump_pred(const PhaseCFG* cfg, Block* orig, outputStream* st = tty) const;
+  void dump_head(const PhaseCFG* cfg, outputStream* st = tty) const;
  void dump() const;
-  void dump( const Block_Array *bbs ) const;
+  void dump(const PhaseCFG* cfg) const;
#endif
};
...
...
@@ -349,6 +348,12 @@ class Block : public CFGElement {
class PhaseCFG : public Phase {
  friend class VMStructs;
 private:
+  // Arena for the blocks to be stored in
+  Arena* _block_arena;
+
+  // Map nodes to owning basic block
+  Block_Array _node_to_block_mapping;
+
  // Build a proper looking cfg.  Return count of basic blocks
  uint build_cfg();
...
...
@@ -371,22 +376,42 @@ class PhaseCFG : public Phase {
  Block* insert_anti_dependences(Block* LCA, Node* load, bool verify = false);
  void verify_anti_dependences(Block* LCA, Node *load) {
-    assert(LCA == _bbs[load->_idx], "should already be scheduled");
+    assert(LCA == get_block_for_node(load), "should already be scheduled");
    insert_anti_dependences(LCA, load, true);
  }

 public:
-  PhaseCFG( Arena *a, RootNode *r, Matcher &m );
+  PhaseCFG(Arena* arena, RootNode* root, Matcher& matcher);

  uint _num_blocks;             // Count of basic blocks
  Block_List _blocks;           // List of basic blocks
  RootNode *_root;              // Root of whole program
-  Block_Array _bbs;             // Map Nodes to owning Basic Block
  Block *_broot;                // Basic block of root
  uint _rpo_ctr;
  CFGLoop* _root_loop;
  float _outer_loop_freq;       // Outmost loop frequency

+  // set which block this node should reside in
+  void map_node_to_block(const Node* node, Block* block) {
+    _node_to_block_mapping.map(node->_idx, block);
+  }
+
+  // removes the mapping from a node to a block
+  void unmap_node_from_block(const Node* node) {
+    _node_to_block_mapping.map(node->_idx, NULL);
+  }
+
+  // get the block in which this node resides
+  Block* get_block_for_node(const Node* node) const {
+    return _node_to_block_mapping[node->_idx];
+  }
+
+  // does this node reside in a block; return true
+  bool has_block(const Node* node) const {
+    return (_node_to_block_mapping.lookup(node->_idx) != NULL);
+  }
+
  // Per node latency estimation, valid only during GCM
  GrowableArray<uint> *_node_latency;
...
...
@@ -405,7 +430,7 @@ class PhaseCFG : public Phase {
  void Estimate_Block_Frequency();

  // Global Code Motion.  See Click's PLDI95 paper.  Place Nodes in specific
-  // basic blocks; i.e. _bbs now maps _idx for all Nodes to some Block.
+  // basic blocks; i.e. _node_to_block_mapping now maps _idx for all Nodes to some Block.
  void GlobalCodeMotion( Matcher &m, uint unique, Node_List &proj_list );

  // Compute the (backwards) latency of a node from the uses
...
...
@@ -454,7 +479,7 @@ class PhaseCFG : public Phase {
  // Insert a node into a block, and update the _bbs
  void insert( Block *b, uint idx, Node *n ) {
    b->_nodes.insert( idx, n );
-    _bbs.map( n->_idx, b );
+    map_node_to_block(n, b);
  }

#ifndef PRODUCT
...
...
@@ -543,7 +568,7 @@ class CFGLoop : public CFGElement {
    _child(NULL), _exit_prob(1.0f) {}
  CFGLoop* parent() { return _parent; }
-  void push_pred(Block* blk, int i, Block_List& worklist, Block_Array& node_to_blk);
+  void push_pred(Block* blk, int i, Block_List& worklist, PhaseCFG* cfg);
  void add_member(CFGElement *s) { _members.push(s); }
  void add_nested_loop(CFGLoop* cl);
  Block* head() {
...
...
src/share/vm/opto/buildOopMap.cpp
...
...
@@ -426,14 +426,16 @@ static void do_liveness( PhaseRegAlloc *regalloc, PhaseCFG *cfg, Block_List *wor
  }
  memset(live, 0, cfg->_num_blocks * (max_reg_ints << LogBytesPerInt));

  // Push preds onto worklist
-  for( uint i=1; i<root->req(); i++ )
-    worklist->push(cfg->_bbs[root->in(i)->_idx]);
+  for (uint i = 1; i < root->req(); i++) {
+    Block* block = cfg->get_block_for_node(root->in(i));
+    worklist->push(block);
+  }

  // ZKM.jar includes tiny infinite loops which are unreached from below.
  // If we missed any blocks, we'll retry here after pushing all missed
  // blocks on the worklist.  Normally this outer loop never trips more
  // than once.
-  while( 1 ) {
+  while (1) {
    while( worklist->size() ) { // Standard worklist algorithm
      Block *b = worklist->rpop();
...
...
@@ -537,8 +539,10 @@ static void do_liveness( PhaseRegAlloc *regalloc, PhaseCFG *cfg, Block_List *wor
      for( l=0; l<max_reg_ints; l++ )
        old_live[l] = tmp_live[l];
      // Push preds onto worklist
-      for( l=1; l<(int)b->num_preds(); l++ )
-        worklist->push(cfg->_bbs[b->pred(l)->_idx]);
+      for (l = 1; l < (int)b->num_preds(); l++) {
+        Block* block = cfg->get_block_for_node(b->pred(l));
+        worklist->push(block);
+      }
    }
  }
...
...
@@ -629,10 +633,9 @@ void Compile::BuildOopMaps() {
    // pred to this block.  Otherwise we have to grab a new OopFlow.
    OopFlow *flow = NULL;       // Flag for finding optimized flow
    Block *pred = (Block*)0xdeadbeef;
-    uint j;
    // Scan this block's preds to find a done predecessor
-    for( j = 1; j < b->num_preds(); j++ ) {
-      Block *p = _cfg->_bbs[b->pred(j)->_idx];
+    for (uint j = 1; j < b->num_preds(); j++) {
+      Block* p = _cfg->get_block_for_node(b->pred(j));
      OopFlow *p_flow = flows[p->_pre_order];
      if( p_flow ) {            // Predecessor is done
        assert( p_flow->_b == p, "cross check" );
...
...
src/share/vm/opto/c2_globals.hpp
...
...
@@ -179,6 +179,9 @@
  product_pd(intx, LoopUnrollLimit,                                         \
          "Unroll loop bodies with node count less than this")              \
                                                                            \
+  product(intx, LoopMaxUnroll, 16,                                         \
+          "Maximum number of unrolls for main loop")                       \
+                                                                           \
  product(intx, LoopUnrollMin, 4,                                           \
          "Minimum number of unroll loop bodies before checking progress"   \
          "of rounds of unroll,optimize,..")                                \
...
...
src/share/vm/opto/chaitin.cpp
...
...
@@ -295,7 +295,7 @@ void PhaseChaitin::new_lrg(const Node *x, uint lrg) {
bool PhaseChaitin::clone_projs_shared(Block *b, uint idx, Node *con, Node *copy, uint max_lrg_id) {
-  Block *bcon = _cfg._bbs[con->_idx];
+  Block* bcon = _cfg.get_block_for_node(con);
  uint cindex = bcon->find_node(con);
  Node *con_next = bcon->_nodes[cindex+1];
  if (con_next->in(0) != con || !con_next->is_MachProj()) {
...
...
@@ -306,7 +306,7 @@ bool PhaseChaitin::clone_projs_shared(Block *b, uint idx, Node *con, Node *copy,
  Node *kills = con_next->clone();
  kills->set_req(0, copy);
  b->_nodes.insert(idx, kills);
-  _cfg._bbs.map(kills->_idx, b);
+  _cfg.map_node_to_block(kills, b);
  new_lrg(kills, max_lrg_id);
  return true;
}
...
...
@@ -962,8 +962,7 @@ void PhaseChaitin::gather_lrg_masks( bool after_aggressive ) {
      // AggressiveCoalesce.  This effectively pre-virtual-splits
      // around uncommon uses of common defs.
      const RegMask &rm = n->in_RegMask(k);
-      if( !after_aggressive &&
-        _cfg._bbs[n->in(k)->_idx]->_freq > 1000*b->_freq ) {
+      if (!after_aggressive && _cfg.get_block_for_node(n->in(k))->_freq > 1000 * b->_freq) {
        // Since we are BEFORE aggressive coalesce, leave the register
        // mask untrimmed by the call.  This encourages more coalescing.
        // Later, AFTER aggressive, this live range will have to spill
...
...
@@ -1709,16 +1708,15 @@ Node *PhaseChaitin::find_base_for_derived( Node **derived_base_map, Node *derive
      // set control to _root and place it into Start block
      // (where top() node is placed).
      base->init_req(0, _cfg._root);
-      Block *startb = _cfg._bbs[C->top()->_idx];
+      Block* startb = _cfg.get_block_for_node(C->top());
      startb->_nodes.insert(startb->find_node(C->top()), base);
-      _cfg._bbs.map( base->_idx, startb );
+      _cfg.map_node_to_block(base, startb);
      assert(_lrg_map.live_range_id(base) == 0, "should not have LRG yet");
    }
    if (_lrg_map.live_range_id(base) == 0) {
      new_lrg(base, maxlrg++);
    }
-    assert(base->in(0) == _cfg._root &&
-           _cfg._bbs[base->_idx] == _cfg._bbs[C->top()->_idx], "base NULL should be shared");
+    assert(base->in(0) == _cfg._root && _cfg.get_block_for_node(base) == _cfg.get_block_for_node(C->top()), "base NULL should be shared");
    derived_base_map[derived->_idx] = base;
    return base;
  }
...
...
@@ -1754,12 +1752,12 @@ Node *PhaseChaitin::find_base_for_derived( Node **derived_base_map, Node *derive
  base->as_Phi()->set_type(t);

  // Search the current block for an existing base-Phi
-  Block *b = _cfg._bbs[derived->_idx];
+  Block* b = _cfg.get_block_for_node(derived);
  for( i = 1; i <= b->end_idx(); i++ ) { // Search for matching Phi
    Node *phi = b->_nodes[i];
    if( !phi->is_Phi() ) {      // Found end of Phis with no match?
      b->_nodes.insert( i, base ); // Must insert created Phi here as base
-      _cfg._bbs.map( base->_idx, b );
+      _cfg.map_node_to_block(base, b);
      new_lrg(base, maxlrg++);
      break;
    }
...
...
@@ -1815,8 +1813,8 @@ bool PhaseChaitin::stretch_base_pointer_live_ranges(ResourceArea *a) {
    if( n->is_Mach() && n->as_Mach()->ideal_Opcode() == Op_CmpI ) {
      Node *phi = n->in(1);
      if( phi->is_Phi() && phi->as_Phi()->region()->is_Loop() ) {
-        Block *phi_block = _cfg._bbs[phi->_idx];
-        if( _cfg._bbs[phi_block->pred(2)->_idx] == b ) {
+        Block* phi_block = _cfg.get_block_for_node(phi);
+        if (_cfg.get_block_for_node(phi_block->pred(2)) == b) {
          const RegMask *mask = C->matcher()->idealreg2spillmask[Op_RegI];
          Node *spill = new (C) MachSpillCopyNode( phi, *mask, *mask);
          insert_proj( phi_block, 1, spill, maxlrg++ );
...
...
@@ -1870,7 +1868,7 @@ bool PhaseChaitin::stretch_base_pointer_live_ranges(ResourceArea *a) {
        if ((_lrg_map.live_range_id(base) >= _lrg_map.max_lrg_id() || // (Brand new base (hence not live) or
             !liveout.member(_lrg_map.live_range_id(base))) &&        // not live) AND
             (_lrg_map.live_range_id(base) > 0)                &&     // not a constant
-            _cfg._bbs[base->_idx] != b) {                            // base not def'd in blk)
+            _cfg.get_block_for_node(base) != b) {                    // base not def'd in blk)
          // Base pointer is not currently live.  Since I stretched
          // the base pointer to here and it crosses basic-block
          // boundaries, the global live info is now incorrect.
...
...
@@ -1993,8 +1991,8 @@ void PhaseChaitin::dump(const Node *n) const {
  tty->print("\n");
}

void PhaseChaitin::dump(const Block *b) const {
-  b->dump_head( &_cfg._bbs );
+  b->dump_head(&_cfg);
  // For all instructions
  for( uint j = 0; j < b->_nodes.size(); j++ )
...
...
@@ -2299,7 +2297,7 @@ void PhaseChaitin::dump_lrg( uint lidx, bool defs_only ) const {
    if (_lrg_map.find_const(n) == lidx) {
      if (!dump_once++) {
        tty->cr();
-        b->dump_head( &_cfg._bbs );
+        b->dump_head(&_cfg);
      }
      dump(n);
      continue;
...
...
@@ -2314,7 +2312,7 @@ void PhaseChaitin::dump_lrg( uint lidx, bool defs_only ) const {
      if (_lrg_map.find_const(m) == lidx) {
        if (!dump_once++) {
          tty->cr();
-          b->dump_head( &_cfg._bbs );
+          b->dump_head(&_cfg);
        }
        dump(n);
      }
...
...
src/share/vm/opto/coalesce.cpp
...
...
@@ -52,7 +52,7 @@ void PhaseCoalesce::dump() const {
  // Print a nice block header
  tty->print("B%d: ",b->_pre_order);
  for( j=1; j<b->num_preds(); j++ )
-    tty->print("B%d ", _phc._cfg._bbs[b->pred(j)->_idx]->_pre_order);
+    tty->print("B%d ", _phc._cfg.get_block_for_node(b->pred(j))->_pre_order);
  tty->print("-> ");
  for( j=0; j<b->_num_succs; j++ )
    tty->print("B%d ",b->_succs[j]->_pre_order);
...
...
@@ -208,7 +208,7 @@ void PhaseAggressiveCoalesce::insert_copy_with_overlap( Block *b, Node *copy, ui
      copy->set_req(idx, tmp);
      // Save source in temp early, before source is killed
      b->_nodes.insert(kill_src_idx, tmp);
-      _phc._cfg._bbs.map( tmp->_idx, b );
+      _phc._cfg.map_node_to_block(tmp, b);
      last_use_idx++;
    }
...
...
@@ -286,7 +286,7 @@ void PhaseAggressiveCoalesce::insert_copies( Matcher &matcher ) {
        Node *m = n->in(j);
        uint src_name = _phc._lrg_map.find(m);
        if (src_name != phi_name) {
-          Block *pred = _phc._cfg._bbs[b->pred(j)->_idx];
+          Block *pred = _phc._cfg.get_block_for_node(b->pred(j));
          Node *copy;
          assert(!m->is_Con() || m->is_Mach(), "all Con must be Mach");
          // Rematerialize constants instead of copying them
...
...
@@ -305,7 +305,7 @@ void PhaseAggressiveCoalesce::insert_copies( Matcher &matcher ) {
          }
          // Insert the copy in the use-def chain
          n->set_req(j, copy);
-          _phc._cfg._bbs.map( copy->_idx, pred );
+          _phc._cfg.map_node_to_block(copy, pred);
          // Extend ("register allocate") the names array for the copy.
          _phc._lrg_map.extend(copy->_idx, phi_name);
        }  // End of if Phi names do not match
...
...
@@ -343,13 +343,13 @@ void PhaseAggressiveCoalesce::insert_copies( Matcher &matcher ) {
        n->set_req(idx, copy);
        // Extend ("register allocate") the names array for the copy.
        _phc._lrg_map.extend(copy->_idx, name);
-        _phc._cfg._bbs.map( copy->_idx, b );
+        _phc._cfg.map_node_to_block(copy, b);
      }
    }  // End of is two-adr

    // Insert a copy at a debug use for a lrg which has high frequency
-    if( b->_freq < OPTO_DEBUG_SPLIT_FREQ || b->is_uncommon(_phc._cfg._bbs) ) {
+    if (b->_freq < OPTO_DEBUG_SPLIT_FREQ || b->is_uncommon(&_phc._cfg)) {
      // Walk the debug inputs to the node and check for lrg freq
      JVMState* jvms = n->jvms();
      uint debug_start = jvms ? jvms->debug_start() : 999999;
...
...
@@ -391,7 +391,7 @@ void PhaseAggressiveCoalesce::insert_copies( Matcher &matcher ) {
          uint max_lrg_id = _phc._lrg_map.max_lrg_id();
          _phc.new_lrg(copy, max_lrg_id);
          _phc._lrg_map.set_max_lrg_id(max_lrg_id + 1);
-          _phc._cfg._bbs.map( copy->_idx, b );
+          _phc._cfg.map_node_to_block(copy, b);
          //tty->print_cr("Split a debug use in Aggressive Coalesce");
        }  // End of if high frequency use/def
      }  // End of for all debug inputs
...
...
@@ -437,7 +437,10 @@ void PhaseAggressiveCoalesce::coalesce( Block *b ) {
    Block *bs = b->_succs[i];
    // Find index of 'b' in 'bs' predecessors
    uint j = 1;
-    while( _phc._cfg._bbs[bs->pred(j)->_idx] != b ) j++;
+    while (_phc._cfg.get_block_for_node(bs->pred(j)) != b) {
+      j++;
+    }
    // Visit all the Phis in successor block
    for( uint k = 1; k<bs->_nodes.size(); k++ ) {
      Node *n = bs->_nodes[k];
...
...
@@ -510,9 +513,9 @@ void PhaseConservativeCoalesce::union_helper( Node *lr1_node, Node *lr2_node, ui
    if (bindex < b->_fhrp_index) b->_fhrp_index--;

    // Stretched lr1; add it to liveness of intermediate blocks
-    Block *b2 = _phc._cfg._bbs[src_copy->_idx];
+    Block *b2 = _phc._cfg.get_block_for_node(src_copy);
    while (b != b2) {
-      b = _phc._cfg._bbs[b->pred(1)->_idx];
+      b = _phc._cfg.get_block_for_node(b->pred(1));
      _phc._live->live(b)->insert(lr1);
    }
  }
...
...
@@ -532,7 +535,7 @@ uint PhaseConservativeCoalesce::compute_separating_interferences(Node *dst_copy,
    bindex2--;                  // Chain backwards 1 instruction
    while (bindex2 == 0) {      // At block start, find prior block
      assert(b2->num_preds() == 2, "cannot double coalesce across c-flow");
-      b2 = _phc._cfg._bbs[b2->pred(1)->_idx];
+      b2 = _phc._cfg.get_block_for_node(b2->pred(1));
      bindex2 = b2->end_idx() - 1;
    }
    // Get prior instruction
...
...
@@ -676,8 +679,8 @@ bool PhaseConservativeCoalesce::copy_copy(Node *dst_copy, Node *src_copy, Block
  if (UseFPUForSpilling && rm.is_AllStack()) {
    // Don't coalesce when frequency difference is large
-    Block *dst_b = _phc._cfg._bbs[dst_copy->_idx];
-    Block *src_def_b = _phc._cfg._bbs[src_def->_idx];
+    Block *dst_b = _phc._cfg.get_block_for_node(dst_copy);
+    Block *src_def_b = _phc._cfg.get_block_for_node(src_def);
    if (src_def_b->_freq > 10 * dst_b->_freq)
      return false;
  }
...
...
@@ -690,7 +693,7 @@ bool PhaseConservativeCoalesce::copy_copy(Node *dst_copy, Node *src_copy, Block
  // Another early bail-out test is when we are double-coalescing and the
  // 2 copies are separated by some control flow.
  if (dst_copy != src_copy) {
-    Block *src_b = _phc._cfg._bbs[src_copy->_idx];
+    Block *src_b = _phc._cfg.get_block_for_node(src_copy);
    Block *b2 = b;
    while (b2 != src_b) {
      if (b2->num_preds() > 2) { // Found merge-point
...
...
@@ -701,7 +704,7 @@ bool PhaseConservativeCoalesce::copy_copy(Node *dst_copy, Node *src_copy, Block
        //record_bias( _phc._lrgs, lr1, lr2 );
        return false;           // To hard to find all interferences
      }
-      b2 = _phc._cfg._bbs[b2->pred(1)->_idx];
+      b2 = _phc._cfg.get_block_for_node(b2->pred(1));
    }
  }
...
...
@@ -786,8 +789,9 @@ bool PhaseConservativeCoalesce::copy_copy(Node *dst_copy, Node *src_copy, Block
// Conservative (but pessimistic) copy coalescing of a single block
void PhaseConservativeCoalesce::coalesce( Block *b ) {
  // Bail out on infrequent blocks
-  if( b->is_uncommon(_phc._cfg._bbs) )
+  if (b->is_uncommon(&_phc._cfg)) {
    return;
+  }
  // Check this block for copies.
  for( uint i = 1; i<b->end_idx(); i++ ) {
    // Check for actual copies on inputs.  Coalesce a copy into its
...
...
src/share/vm/opto/compile.cpp
...
...
@@ -2262,7 +2262,7 @@ void Compile::dump_asm(int *pcs, uint pc_limit) {
      tty->print("%3.3x   ", pcs[n->_idx]);
    else
      tty->print("      ");
-    b->dump_head( &_cfg->_bbs );
+    b->dump_head(_cfg);
    if (b->is_connector()) {
      tty->print_cr("        # Empty connector block");
    } else if (b->num_preds() == 2 && b->pred(1)->is_CatchProj() && b->pred(1)->as_CatchProj()->_con == CatchProjNode::fall_through_index) {
...
...
@@ -3525,7 +3525,7 @@ void Compile::ConstantTable::add(Constant& con) {
}

Compile::Constant Compile::ConstantTable::add(MachConstantNode* n, BasicType type, jvalue value) {
-  Block* b = Compile::current()->cfg()->_bbs[n->_idx];
+  Block* b = Compile::current()->cfg()->get_block_for_node(n);
  Constant con(type, value, b->_freq);
  add(con);
  return con;
...
...
src/share/vm/opto/domgraph.cpp
...
...
@@ -105,8 +105,8 @@ void PhaseCFG::Dominators( ) {
    // Step 2:
    Node *whead = w->_block->head();
-    for( uint j=1; j < whead->req(); j++ ) {
-      Block *b = _bbs[whead->in(j)->_idx];
+    for (uint j = 1; j < whead->req(); j++) {
+      Block* b = get_block_for_node(whead->in(j));
      Tarjan *vx = &tarjan[b->_pre_order];
      Tarjan *u = vx->EVAL();
      if( u->_semi < w->_semi )
...
...
src/share/vm/opto/gcm.cpp
(diff collapsed; not shown)
src/share/vm/opto/idealGraphPrinter.cpp
...
...
@@ -413,9 +413,9 @@ void IdealGraphPrinter::visit_node(Node *n, bool edges, VectorSet* temp_set) {
  print_prop("debug_idx", node->_debug_idx);
#endif

  if (C->cfg() != NULL) {
-    Block *block = C->cfg()->_bbs[node->_idx];
+    Block* block = C->cfg()->get_block_for_node(node);
    if (block == NULL) {
      print_prop("block", C->cfg()->_blocks[0]->_pre_order);
    } else {
      print_prop("block", block->_pre_order);
...
...
src/share/vm/opto/ifg.cpp
...
...
@@ -565,7 +565,7 @@ uint PhaseChaitin::build_ifg_physical( ResourceArea *a ) {
        lrgs(r)._def = 0;
      }
      n->disconnect_inputs(NULL, C);
-      _cfg._bbs.map(n->_idx, NULL);
+      _cfg.unmap_node_from_block(n);
      n->replace_by(C->top());
      // Since yanking a Node from block, high pressure moves up one
      hrp_index[0]--;
...
...
@@ -607,7 +607,7 @@ uint PhaseChaitin::build_ifg_physical( ResourceArea *a ) {
      if( n->is_SpillCopy()
          && lrgs(r).is_singledef() // MultiDef live range can still split
          && n->outcnt() == 1       // and use must be in this block
-          && _cfg._bbs[n->unique_out()->_idx] == b ) {
+          && _cfg.get_block_for_node(n->unique_out()) == b) {
        // All single-use MachSpillCopy(s) that immediately precede their
        // use must color early.  If a longer live range steals their
        // color, the spill copy will split and may push another spill copy
...
...
src/share/vm/opto/lcm.cpp
...
...
@@ -237,7 +237,7 @@ void Block::implicit_null_check(PhaseCFG *cfg, Node *proj, Node *val, int allowe
    }

    // Check ctrl input to see if the null-check dominates the memory op
-    Block *cb = cfg->_bbs[mach->_idx];
+    Block *cb = cfg->get_block_for_node(mach);
    cb = cb->_idom;             // Always hoist at least 1 block
    if( !was_store ) {          // Stores can be hoisted only one block
      while( cb->_dom_depth > (_dom_depth + 1))
...
...
@@ -262,7 +262,7 @@ void Block::implicit_null_check(PhaseCFG *cfg, Node *proj, Node *val, int allowe
        if( is_decoden ) continue;
      }
      // Block of memory-op input
-      Block *inb = cfg->_bbs[mach->in(j)->_idx];
+      Block *inb = cfg->get_block_for_node(mach->in(j));
      Block *b = this;          // Start from nul check
      while( b != inb && b->_dom_depth > inb->_dom_depth )
        b = b->_idom;           // search upwards for input
...
...
@@ -272,7 +272,7 @@ void Block::implicit_null_check(PhaseCFG *cfg, Node *proj, Node *val, int allowe
      }
      if( j > 0 ) continue;
    }
-    Block *mb = cfg->_bbs[mach->_idx];
+    Block *mb = cfg->get_block_for_node(mach);
    // Hoisting stores requires more checks for the anti-dependence case.
    // Give up hoisting if we have to move the store past any load.
    if( was_store ) {
...
...
@@ -291,7 +291,7 @@ void Block::implicit_null_check(PhaseCFG *cfg, Node *proj, Node *val, int allowe
          break;                // Found anti-dependent load
        // Make sure control does not do a merge (would have to check allpaths)
        if( b->num_preds() != 2 ) break;
-        b = cfg->_bbs[b->pred(1)->_idx];          // Move up to predecessor block
+        b = cfg->get_block_for_node(b->pred(1)); // Move up to predecessor block
      }
      if( b != this ) continue;
    }
...
...
@@ -303,15 +303,15 @@ void Block::implicit_null_check(PhaseCFG *cfg, Node *proj, Node *val, int allowe
    // Found a candidate!  Pick one with least dom depth - the highest
    // in the dom tree should be closest to the null check.
-    if( !best ||
-        cfg->_bbs[mach->_idx]->_dom_depth < cfg->_bbs[best->_idx]->_dom_depth ) {
+    if (best == NULL || cfg->get_block_for_node(mach)->_dom_depth < cfg->get_block_for_node(best)->_dom_depth) {
      best = mach;
      bidx = vidx;
    }
  }
  // No candidate!
-  if( !best ) return;
+  if (best == NULL) {
+    return;
+  }

  // ---- Found an implicit null check
  extern int implicit_null_checks;
...
...
@@ -319,29 +319,29 @@ void Block::implicit_null_check(PhaseCFG *cfg, Node *proj, Node *val, int allowe
  if( is_decoden ) {
    // Check if we need to hoist decodeHeapOop_not_null first.
-    Block *valb = cfg->_bbs[val->_idx];
+    Block *valb = cfg->get_block_for_node(val);
    if( this != valb && this->_dom_depth < valb->_dom_depth ) {
      // Hoist it up to the end of the test block.
      valb->find_remove(val);
      this->add_inst(val);
-      cfg->_bbs.map(val->_idx,this);
+      cfg->map_node_to_block(val, this);
      // DecodeN on x86 may kill flags. Check for flag-killing projections
      // that also need to be hoisted.
      for (DUIterator_Fast jmax, j = val->fast_outs(jmax); j < jmax; j++) {
        Node* n = val->fast_out(j);
        if( n->is_MachProj() ) {
-          cfg->_bbs[n->_idx]->find_remove(n);
+          cfg->get_block_for_node(n)->find_remove(n);
          this->add_inst(n);
-          cfg->_bbs.map(n->_idx,this);
+          cfg->map_node_to_block(n, this);
        }
      }
    }
  }
  // Hoist the memory candidate up to the end of the test block.
-  Block *old_block = cfg->_bbs[best->_idx];
+  Block *old_block = cfg->get_block_for_node(best);
  old_block->find_remove(best);
  add_inst(best);
-  cfg->_bbs.map(best->_idx,this);
+  cfg->map_node_to_block(best, this);

  // Move the control dependence
  if (best->in(0) && best->in(0) == old_block->_nodes[0])
...
...
@@ -352,9 +352,9 @@ void Block::implicit_null_check(PhaseCFG *cfg, Node *proj, Node *val, int allowe
   for (DUIterator_Fast jmax, j = best->fast_outs(jmax); j < jmax; j++) {
     Node* n = best->fast_out(j);
     if (n->is_MachProj()) {
-      cfg->_bbs[n->_idx]->find_remove(n);
+      cfg->get_block_for_node(n)->find_remove(n);
       add_inst(n);
-      cfg->_bbs.map(n->_idx, this);
+      cfg->map_node_to_block(n, this);
     }
   }
...
@@ -385,7 +385,7 @@ void Block::implicit_null_check(PhaseCFG *cfg, Node *proj, Node *val, int allowe
   Node *old_tst = proj->in(0);
   MachNode *nul_chk = new (C) MachNullCheckNode(old_tst->in(0), best, bidx);
   _nodes.map(end_idx(), nul_chk);
-  cfg->_bbs.map(nul_chk->_idx, this);
+  cfg->map_node_to_block(nul_chk, this);
   // Redirect users of old_test to nul_chk
   for (DUIterator_Last i2min, i2 = old_tst->last_outs(i2min); i2 >= i2min; --i2)
     old_tst->last_out(i2)->set_req(0, nul_chk);
...
@@ -468,7 +468,7 @@ Node *Block::select(PhaseCFG *cfg, Node_List &worklist, GrowableArray<int> &read
       Node* use = n->fast_out(j);
       // The use is a conditional branch, make them adjacent
-      if (use->is_MachIf() && cfg->_bbs[use->_idx] == this) {
+      if (use->is_MachIf() && cfg->get_block_for_node(use) == this) {
         found_machif = true;
         break;
       }
...
@@ -529,13 +529,14 @@ Node *Block::select(PhaseCFG *cfg, Node_List &worklist, GrowableArray<int> &read
 //------------------------------set_next_call----------------------------------
-void Block::set_next_call( Node *n, VectorSet &next_call, Block_Array &bbs ) {
+void Block::set_next_call( Node *n, VectorSet &next_call, PhaseCFG* cfg) {
   if (next_call.test_set(n->_idx)) return;
   for (uint i = 0; i < n->len(); i++) {
     Node *m = n->in(i);
     if (!m) continue;  // must see all nodes in block that precede call
-    if (bbs[m->_idx] == this)
-      set_next_call(m, next_call, bbs);
+    if (cfg->get_block_for_node(m) == this) {
+      set_next_call(m, next_call, cfg);
+    }
   }
 }
...
@@ -545,12 +546,12 @@ void Block::set_next_call( Node *n, VectorSet &next_call, Block_Array &bbs ) {
 // next subroutine call get priority - basically it moves things NOT needed
 // for the next call till after the call.  This prevents me from trying to
 // carry lots of stuff live across a call.
-void Block::needed_for_next_call(Node *this_call, VectorSet &next_call, Block_Array &bbs) {
+void Block::needed_for_next_call(Node *this_call, VectorSet &next_call, PhaseCFG* cfg) {
   // Find the next control-defining Node in this block
   Node* call = NULL;
   for (DUIterator_Fast imax, i = this_call->fast_outs(imax); i < imax; i++) {
     Node* m = this_call->fast_out(i);
-    if (bbs[m->_idx] == this &&  // Local-block user
+    if (cfg->get_block_for_node(m) == this &&  // Local-block user
         m != this_call &&        // Not self-start node
         m->is_MachCall())
       call = m;
...
@@ -558,7 +559,7 @@ void Block::needed_for_next_call(Node *this_call, VectorSet &next_call, Block_Ar
   }
   if (call == NULL) return;    // No next call (e.g., block end is near)
   // Set next-call for all inputs to this call
-  set_next_call(call, next_call, bbs);
+  set_next_call(call, next_call, cfg);
 }

 //------------------------------add_call_kills-------------------------------------
...
@@ -578,7 +579,7 @@ void Block::add_call_kills(MachProjNode *proj, RegMask& regs, const char* save_p
 //------------------------------sched_call-------------------------------------
-uint Block::sched_call( Matcher &matcher, Block_Array &bbs, uint node_cnt, Node_List &worklist, GrowableArray<int> &ready_cnt, MachCallNode *mcall, VectorSet &next_call ) {
+uint Block::sched_call( Matcher &matcher, PhaseCFG* cfg, uint node_cnt, Node_List &worklist, GrowableArray<int> &ready_cnt, MachCallNode *mcall, VectorSet &next_call ) {
   RegMask regs;

   // Schedule all the users of the call right now.  All the users are
...
@@ -597,12 +598,14 @@ uint Block::sched_call( Matcher &matcher, Block_Array &bbs, uint node_cnt, Node_
     // Check for scheduling the next control-definer
     if (n->bottom_type() == Type::CONTROL)
       // Warm up next pile of heuristic bits
-      needed_for_next_call(n, next_call, bbs);
+      needed_for_next_call(n, next_call, cfg);

     // Children of projections are now all ready
     for (DUIterator_Fast jmax, j = n->fast_outs(jmax); j < jmax; j++) {
       Node* m = n->fast_out(j);  // Get user
-      if (bbs[m->_idx] != this) continue;
+      if (cfg->get_block_for_node(m) != this) {
+        continue;
+      }
       if (m->is_Phi()) continue;
       int m_cnt = ready_cnt.at(m->_idx) - 1;
       ready_cnt.at_put(m->_idx, m_cnt);
...
@@ -620,7 +623,7 @@ uint Block::sched_call( Matcher &matcher, Block_Array &bbs, uint node_cnt, Node_
   uint r_cnt = mcall->tf()->range()->cnt();
   int op = mcall->ideal_Opcode();
   MachProjNode *proj = new (matcher.C) MachProjNode(mcall, r_cnt + 1, RegMask::Empty, MachProjNode::fat_proj);
-  bbs.map(proj->_idx, this);
+  cfg->map_node_to_block(proj, this);
   _nodes.insert(node_cnt++, proj);

   // Select the right register save policy.
...
@@ -708,7 +711,7 @@ bool Block::schedule_local(PhaseCFG *cfg, Matcher &matcher, GrowableArray<int> &
     uint local = 0;
     for (uint j = 0; j < cnt; j++) {
       Node *m = n->in(j);
-      if( m && cfg->_bbs[m->_idx] == this && !m->is_top() )
+      if( m && cfg->get_block_for_node(m) == this && !m->is_top() )
         local++;              // One more block-local input
     }
     ready_cnt.at_put(n->_idx, local); // Count em up
...
@@ -720,7 +723,7 @@ bool Block::schedule_local(PhaseCFG *cfg, Matcher &matcher, GrowableArray<int> &
         for (uint prec = n->req(); prec < n->len(); prec++) {
           Node* oop_store = n->in(prec);
           if (oop_store != NULL) {
-            assert(cfg->_bbs[oop_store->_idx]->_dom_depth <= this->_dom_depth, "oop_store must dominate card-mark");
+            assert(cfg->get_block_for_node(oop_store)->_dom_depth <= this->_dom_depth, "oop_store must dominate card-mark");
           }
         }
       }
...
@@ -753,7 +756,7 @@ bool Block::schedule_local(PhaseCFG *cfg, Matcher &matcher, GrowableArray<int> &
     Node *n = _nodes[i3];       // Get pre-scheduled
     for (DUIterator_Fast jmax, j = n->fast_outs(jmax); j < jmax; j++) {
       Node* m = n->fast_out(j);
-      if( cfg->_bbs[m->_idx] == this ) { // Local-block user
+      if (cfg->get_block_for_node(m) == this) { // Local-block user
        int m_cnt = ready_cnt.at(m->_idx) - 1;
        ready_cnt.at_put(m->_idx, m_cnt);   // Fix ready count
      }
...
@@ -786,7 +789,7 @@ bool Block::schedule_local(PhaseCFG *cfg, Matcher &matcher, GrowableArray<int> &
   }

   // Warm up the 'next_call' heuristic bits
-  needed_for_next_call(_nodes[0], next_call, cfg->_bbs);
+  needed_for_next_call(_nodes[0], next_call, cfg);

 #ifndef PRODUCT
   if (cfg->trace_opto_pipelining()) {
...
@@ -837,7 +840,7 @@ bool Block::schedule_local(PhaseCFG *cfg, Matcher &matcher, GrowableArray<int> &
 #endif
     if (n->is_MachCall()) {
       MachCallNode *mcall = n->as_MachCall();
-      phi_cnt = sched_call(matcher, cfg->_bbs, phi_cnt, worklist, ready_cnt, mcall, next_call);
+      phi_cnt = sched_call(matcher, cfg, phi_cnt, worklist, ready_cnt, mcall, next_call);
       continue;
     }
...
@@ -847,7 +850,7 @@ bool Block::schedule_local(PhaseCFG *cfg, Matcher &matcher, GrowableArray<int> &
       regs.OR(n->out_RegMask());

       MachProjNode *proj = new (matcher.C) MachProjNode(n, 1, RegMask::Empty, MachProjNode::fat_proj);
-      cfg->_bbs.map(proj->_idx, this);
+      cfg->map_node_to_block(proj, this);
       _nodes.insert(phi_cnt++, proj);

       add_call_kills(proj, regs, matcher._c_reg_save_policy, false);
...
@@ -856,7 +859,9 @@ bool Block::schedule_local(PhaseCFG *cfg, Matcher &matcher, GrowableArray<int> &
     // Children are now all ready
     for (DUIterator_Fast i5max, i5 = n->fast_outs(i5max); i5 < i5max; i5++) {
       Node* m = n->fast_out(i5); // Get user
-      if( cfg->_bbs[m->_idx] != this ) continue;
+      if (cfg->get_block_for_node(m) != this) {
+        continue;
+      }
       if (m->is_Phi()) continue;
       if (m->_idx >= max_idx) { // new node, skip it
         assert(m->is_MachProj() && n->is_Mach() && n->as_Mach()->has_call(), "unexpected node types");
...
@@ -914,7 +919,7 @@ static void catch_cleanup_fix_all_inputs(Node *use, Node *old_def, Node *new_def
 }

 //------------------------------catch_cleanup_find_cloned_def------------------
-static Node *catch_cleanup_find_cloned_def(Block *use_blk, Node *def, Block *def_blk, Block_Array &bbs, int n_clone_idx) {
+static Node *catch_cleanup_find_cloned_def(Block *use_blk, Node *def, Block *def_blk, PhaseCFG* cfg, int n_clone_idx) {
   assert( use_blk != def_blk, "Inter-block cleanup only");

   // The use is some block below the Catch.  Find and return the clone of the def
...
@@ -940,7 +945,8 @@ static Node *catch_cleanup_find_cloned_def(Block *use_blk, Node *def, Block *def
     // PhiNode, the PhiNode uses from the def and IT's uses need fixup.
     Node_Array inputs = new Node_List(Thread::current()->resource_area());
     for (uint k = 1; k < use_blk->num_preds(); k++) {
-      inputs.map(k, catch_cleanup_find_cloned_def(bbs[use_blk->pred(k)->_idx], def, def_blk, bbs, n_clone_idx));
+      Block* block = cfg->get_block_for_node(use_blk->pred(k));
+      inputs.map(k, catch_cleanup_find_cloned_def(block, def, def_blk, cfg, n_clone_idx));
     }

     // Check to see if the use_blk already has an identical phi inserted.
...
@@ -962,7 +968,7 @@ static Node *catch_cleanup_find_cloned_def(Block *use_blk, Node *def, Block *def
     if (fixup == NULL) {
       Node *new_phi = PhiNode::make(use_blk->head(), def);
       use_blk->_nodes.insert(1, new_phi);
-      bbs.map(new_phi->_idx, use_blk);
+      cfg->map_node_to_block(new_phi, use_blk);
       for (uint k = 1; k < use_blk->num_preds(); k++) {
         new_phi->set_req(k, inputs[k]);
       }
...
@@ -1002,17 +1008,17 @@ static void catch_cleanup_intra_block(Node *use, Node *def, Block *blk, int beg,
 //------------------------------catch_cleanup_inter_block---------------------
 // Fix all input edges in use that reference "def".  The use is in a different
 // block than the def.
-static void catch_cleanup_inter_block(Node *use, Block *use_blk, Node *def, Block *def_blk, Block_Array &bbs, int n_clone_idx) {
+static void catch_cleanup_inter_block(Node *use, Block *use_blk, Node *def, Block *def_blk, PhaseCFG* cfg, int n_clone_idx) {
   if( !use_blk ) return;        // Can happen if the use is a precedence edge

-  Node *new_def = catch_cleanup_find_cloned_def(use_blk, def, def_blk, bbs, n_clone_idx);
+  Node *new_def = catch_cleanup_find_cloned_def(use_blk, def, def_blk, cfg, n_clone_idx);
   catch_cleanup_fix_all_inputs(use, def, new_def);
 }

 //------------------------------call_catch_cleanup-----------------------------
 // If we inserted any instructions between a Call and his CatchNode,
 // clone the instructions on all paths below the Catch.
-void Block::call_catch_cleanup(Block_Array &bbs, Compile* C) {
+void Block::call_catch_cleanup(PhaseCFG* cfg, Compile* C) {

   // End of region to clone
   uint end = end_idx();
...
@@ -1037,7 +1043,7 @@ void Block::call_catch_cleanup(Block_Array &bbs, Compile* C) {
       // since clones dominate on each path.
       Node *clone = _nodes[j-1]->clone();
       sb->_nodes.insert(1, clone);
-      bbs.map(clone->_idx, sb);
+      cfg->map_node_to_block(clone, sb);
     }
   }
...
@@ -1054,18 +1060,19 @@ void Block::call_catch_cleanup(Block_Array &bbs, Compile* C) {
     uint max = out->size();
     for (uint j = 0; j < max; j++) { // For all users
       Node *use = out->pop();
-      Block *buse = bbs[use->_idx];
+      Block *buse = cfg->get_block_for_node(use);
       if (use->is_Phi()) {
         for (uint k = 1; k < use->req(); k++)
           if (use->in(k) == n) {
-            Node *fixup = catch_cleanup_find_cloned_def(bbs[buse->pred(k)->_idx], n, this, bbs, n_clone_idx);
+            Block* block = cfg->get_block_for_node(buse->pred(k));
+            Node *fixup = catch_cleanup_find_cloned_def(block, n, this, cfg, n_clone_idx);
             use->set_req(k, fixup);
           }
       } else {
         if (this == buse) {
           catch_cleanup_intra_block(use, n, this, beg, n_clone_idx);
         } else {
-          catch_cleanup_inter_block(use, buse, n, this, bbs, n_clone_idx);
+          catch_cleanup_inter_block(use, buse, n, this, cfg, n_clone_idx);
         }
       }
     } // End for all users
...
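Across the block.cpp hunks above the commit applies one mechanical pattern: direct reads and writes of the public `_bbs` Block_Array become calls through PhaseCFG accessors. A minimal standalone sketch of that encapsulation (hypothetical stand-in types, not HotSpot's real classes) might look like:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-ins for HotSpot's Node and Block types.
struct Node  { unsigned _idx; };   // every node carries a unique index
struct Block { int _pre_order; };

// Sketch of the PhaseCFG side of the change: the node->block array is now
// private, and every lookup or update goes through a named accessor.
class PhaseCFG {
  std::vector<Block*> _node_to_block_mapping; // was the public `_bbs`
public:
  explicit PhaseCFG(std::size_t max_node_idx)
      : _node_to_block_mapping(max_node_idx, nullptr) {}

  Block* get_block_for_node(const Node* n) const {
    return _node_to_block_mapping[n->_idx];
  }
  void map_node_to_block(const Node* n, Block* b) {
    _node_to_block_mapping[n->_idx] = b;
  }
  void unmap_node_from_block(const Node* n) {
    _node_to_block_mapping[n->_idx] = nullptr; // node is in no block
  }
};
```

Call sites that used to write `cfg->_bbs[n->_idx]` or `cfg->_bbs.map(n->_idx, b)` become `cfg->get_block_for_node(n)` and `cfg->map_node_to_block(n, b)`, so the backing representation can later change without touching every compiler pass.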
src/share/vm/opto/live.cpp
@@ -101,7 +101,7 @@ void PhaseLive::compute(uint maxlrg) {
       for (uint k = 1; k < cnt; k++) {
         Node *nk = n->in(k);
         uint nkidx = nk->_idx;
-        if( _cfg._bbs[nkidx] != b ) {
+        if (_cfg.get_block_for_node(nk) != b) {
           uint u = _names[nkidx];
           use->insert(u);
           DEBUG_ONLY(def_outside->insert(u);)
...
@@ -121,7 +121,7 @@ void PhaseLive::compute(uint maxlrg) {
     // Push these live-in things to predecessors
     for (uint l = 1; l < b->num_preds(); l++) {
-      Block *p = _cfg._bbs[b->pred(l)->_idx];
+      Block *p = _cfg.get_block_for_node(b->pred(l));
       add_liveout(p, use, first_pass);

       // PhiNode uses go in the live-out set of prior blocks.
...
@@ -142,8 +142,10 @@ void PhaseLive::compute(uint maxlrg) {
       assert(delta->count(), "missing delta set");

       // Add new-live-in to predecessors live-out sets
-      for( uint l=1; l<b->num_preds(); l++ )
-        add_liveout( _cfg._bbs[b->pred(l)->_idx], delta, first_pass );
+      for (uint l = 1; l < b->num_preds(); l++) {
+        Block* block = _cfg.get_block_for_node(b->pred(l));
+        add_liveout(block, delta, first_pass);
+      }

       freeset(b);
     } // End of while-worklist-not-empty
...
src/share/vm/opto/loopTransform.cpp
@@ -624,8 +624,6 @@ bool IdealLoopTree::policy_maximally_unroll( PhaseIdealLoop *phase ) const {
 }

-#define MAX_UNROLL 16 // maximum number of unrolls for main loop
-
 //------------------------------policy_unroll----------------------------------
 // Return TRUE or FALSE if the loop should be unrolled or not.  Unroll if
 // the loop is a CountedLoop and the body is small enough.
...
@@ -642,7 +640,7 @@ bool IdealLoopTree::policy_unroll( PhaseIdealLoop *phase ) const {
   if (cl->trip_count() <= (uint)(cl->is_normal_loop() ? 2 : 1)) return false;

   int future_unroll_ct = cl->unrolled_count() * 2;
-  if (future_unroll_ct > MAX_UNROLL) return false;
+  if (future_unroll_ct > LoopMaxUnroll) return false;

   // Check for initial stride being a small enough constant
   if (abs(cl->stride_con()) > (1<<2)*future_unroll_ct) return false;
...
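The loopTransform.cpp hunk swaps the hard-coded `MAX_UNROLL` cap for the `LoopMaxUnroll` flag, making the unroll ceiling tunable instead of a compile-time constant. A toy sketch of that guard (hypothetical standalone function, not HotSpot's flag machinery; the default of 16 matches the deleted constant):

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical stand-in for the -XX:LoopMaxUnroll VM flag.
static int LoopMaxUnroll = 16;

// Sketch of the unroll-policy guard: the unroll count doubles each round,
// and unrolling stops once the configurable ceiling would be exceeded or
// the loop stride is too large a constant (same (1<<2) factor as the hunk).
static bool policy_unroll_sketch(int unrolled_count, int stride_con) {
  int future_unroll_ct = unrolled_count * 2;
  if (future_unroll_ct > LoopMaxUnroll) return false;
  if (std::abs(stride_con) > (1 << 2) * future_unroll_ct) return false;
  return true;
}
```

With the flag in place of the macro, lowering `-XX:LoopMaxUnroll` at VM startup would stop unrolling earlier without rebuilding the VM.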
src/share/vm/opto/node.hpp
@@ -42,7 +42,6 @@ class AliasInfo;
 class AllocateArrayNode;
 class AllocateNode;
 class Block;
-class Block_Array;
 class BoolNode;
 class BoxLockNode;
 class CMoveNode;
...
src/share/vm/opto/output.cpp
@@ -68,7 +68,6 @@ void Compile::Output() {
     return;
   }
   // Make sure I can find the Start Node
-  Block_Array& bbs = _cfg->_bbs;
   Block *entry = _cfg->_blocks[1];
   Block *broot = _cfg->_broot;
...
@@ -77,8 +76,8 @@ void Compile::Output() {
   // Replace StartNode with prolog
   MachPrologNode *prolog = new (this) MachPrologNode();
   entry->_nodes.map(0, prolog);
-  bbs.map( prolog->_idx, entry );
-  bbs.map( start->_idx, NULL ); // start is no longer in any block
+  _cfg->map_node_to_block(prolog, entry);
+  _cfg->unmap_node_from_block(start); // start is no longer in any block

   // Virtual methods need an unverified entry point
...
@@ -117,8 +116,7 @@ void Compile::Output() {
       if (m->is_Mach() && m->as_Mach()->ideal_Opcode() != Op_Halt) {
         MachEpilogNode *epilog = new (this) MachEpilogNode(m->as_Mach()->ideal_Opcode() == Op_Return);
         b->add_inst(epilog);
-        bbs.map(epilog->_idx, b);
-        //_regalloc->set_bad(epilog->_idx); // Already initialized this way.
+        _cfg->map_node_to_block(epilog, b);
       }
     }
   }
...
@@ -252,7 +250,7 @@ void Compile::Insert_zap_nodes() {
       if (insert) {
         Node *zap = call_zap_node(n->as_MachSafePoint(), i);
         b->_nodes.insert(j, zap);
-        _cfg->_bbs.map( zap->_idx, b );
+        _cfg->map_node_to_block(zap, b);
         ++j;
       }
     }
...
@@ -1234,7 +1232,7 @@ void Compile::fill_buffer(CodeBuffer* cb, uint* blk_starts) {
 #ifdef ASSERT
     if (!b->is_connector()) {
       stringStream st;
-      b->dump_head(&_cfg->_bbs, &st);
+      b->dump_head(_cfg, &st);
       MacroAssembler(cb).block_comment(st.as_string());
     }
     jmp_target[i] = 0;
...
@@ -1310,7 +1308,7 @@ void Compile::fill_buffer(CodeBuffer* cb, uint* blk_starts) {
           MachNode *nop = new (this) MachNopNode(nops_cnt);
           b->_nodes.insert(j++, nop);
           last_inst++;
-          _cfg->_bbs.map( nop->_idx, b );
+          _cfg->map_node_to_block(nop, b);
           nop->emit(*cb, _regalloc);
           cb->flush_bundle(true);
           current_offset = cb->insts_size();
...
@@ -1395,7 +1393,7 @@ void Compile::fill_buffer(CodeBuffer* cb, uint* blk_starts) {
             if (needs_padding && replacement->avoid_back_to_back()) {
               MachNode *nop = new (this) MachNopNode();
               b->_nodes.insert(j++, nop);
-              _cfg->_bbs.map(nop->_idx, b);
+              _cfg->map_node_to_block(nop, b);
               last_inst++;
               nop->emit(*cb, _regalloc);
               cb->flush_bundle(true);
...
@@ -1549,7 +1547,7 @@ void Compile::fill_buffer(CodeBuffer* cb, uint* blk_starts) {
       if (padding > 0) {
         MachNode *nop = new (this) MachNopNode(padding / nop_size);
         b->_nodes.insert(b->_nodes.size(), nop);
-        _cfg->_bbs.map( nop->_idx, b );
+        _cfg->map_node_to_block(nop, b);
         nop->emit(*cb, _regalloc);
         current_offset = cb->insts_size();
       }
...
@@ -1737,7 +1735,6 @@ uint Scheduling::_total_instructions_per_bundle[Pipeline::_max_instrs_per_cycle+
 Scheduling::Scheduling(Arena *arena, Compile &compile)
         : _arena(arena),
           _cfg(compile.cfg()),
-          _bbs(compile.cfg()->_bbs),
           _regalloc(compile.regalloc()),
           _reg_node(arena),
           _bundle_instr_count(0),
...
@@ -2085,8 +2082,9 @@ void Scheduling::DecrementUseCounts(Node *n, const Block *bb) {
     if (def->is_Proj())         // If this is a machine projection, then
       def = def->in(0);         // propagate usage thru to the base instruction

-    if( _bbs[def->_idx] != bb ) // Ignore if not block-local
+    if (_cfg->get_block_for_node(def) != bb) { // Ignore if not block-local
       continue;
+    }

     // Compute the latency
     uint l = _bundle_cycle_number + n->latency(i);
...
@@ -2358,9 +2356,10 @@ void Scheduling::ComputeUseCount(const Block *bb) {
       Node *inp = n->in(k);
       if (!inp) continue;
       assert(inp != n, "no cycles allowed");
-      if( _bbs[inp->_idx] == bb ) { // Block-local use?
-        if( inp->is_Proj() )    // Skip through Proj's
+      if (_cfg->get_block_for_node(inp) == bb) { // Block-local use?
+        if (inp->is_Proj()) { // Skip through Proj's
           inp = inp->in(0);
+        }
         ++_uses[inp->_idx];     // Count 1 block-local use
       }
     }
...
@@ -2643,7 +2642,7 @@ void Scheduling::anti_do_def( Block *b, Node *def, OptoReg::Name def_reg, int is
     return;
   Node *pinch = _reg_node[def_reg]; // Get pinch point
-  if( !pinch || _bbs[pinch->_idx] != b || // No pinch-point yet?
+  if ((pinch == NULL) || _cfg->get_block_for_node(pinch) != b || // No pinch-point yet?
       is_def ) {                // Check for a true def (not a kill)
     _reg_node.map(def_reg, def); // Record def/kill as the optimistic pinch-point
     return;
...
@@ -2669,7 +2668,7 @@ void Scheduling::anti_do_def( Block *b, Node *def, OptoReg::Name def_reg, int is
       _cfg->C->record_method_not_compilable("too many D-U pinch points");
       return;
     }
-    _bbs.map(pinch->_idx, b);          // Pretend it's valid in this block (lazy init)
+    _cfg->map_node_to_block(pinch, b); // Pretend it's valid in this block (lazy init)
     _reg_node.map(def_reg, pinch);     // Record pinch-point
     //_regalloc->set_bad(pinch->_idx); // Already initialized this way.
     if( later_def->outcnt() == 0 || later_def->ideal_reg() == MachProjNode::fat_proj ) { // Distinguish def from kill
...
@@ -2713,9 +2712,9 @@ void Scheduling::anti_do_use( Block *b, Node *use, OptoReg::Name use_reg ) {
     return;
   Node *pinch = _reg_node[use_reg]; // Get pinch point
   // Check for no later def_reg/kill in block
-  if( pinch && _bbs[pinch->_idx] == b &&
+  if ((pinch != NULL) && _cfg->get_block_for_node(pinch) == b &&
       // Use has to be block-local as well
-      _bbs[use->_idx] == b ) {
+      _cfg->get_block_for_node(use) == b) {
     if( pinch->Opcode() == Op_Node && // Real pinch-point (not optimistic?)
         pinch->req() == 1 ) {   // pinch not yet in block?
       pinch->del_req(0);        // yank pointer to later-def, also set flag
...
@@ -2895,7 +2894,7 @@ void Scheduling::garbage_collect_pinch_nodes() {
   int trace_cnt = 0;
   for (uint k = 0; k < _reg_node.Size(); k++) {
     Node* pinch = _reg_node[k];
-    if (pinch != NULL && pinch->Opcode() == Op_Node &&
+    if ((pinch != NULL) && pinch->Opcode() == Op_Node &&
         // no predecence input edges
         (pinch->req() == pinch->len() || pinch->in(pinch->req()) == NULL)) {
       cleanup_pinch(pinch);
...
src/share/vm/opto/output.hpp
@@ -96,9 +96,6 @@ private:
   // List of nodes currently available for choosing for scheduling
   Node_List _available;

-  // Mapping from node (index) to basic block
-  Block_Array &_bbs;
-
   // For each instruction beginning a bundle, the number of following
   // nodes to be bundled with it.
   Bundle *_node_bundling_base;
...
src/share/vm/opto/postaloc.cpp
@@ -78,11 +78,13 @@ bool PhaseChaitin::may_be_copy_of_callee( Node *def ) const {
 // Helper function for yank_if_dead
 int PhaseChaitin::yank( Node *old, Block *current_block, Node_List *value, Node_List *regnd ) {
   int blk_adjust = 0;
-  Block *oldb = _cfg._bbs[old->_idx];
+  Block *oldb = _cfg.get_block_for_node(old);
   oldb->find_remove(old);
   // Count 1 if deleting an instruction from the current block
-  if( oldb == current_block ) blk_adjust++;
-  _cfg._bbs.map(old->_idx, NULL);
+  if (oldb == current_block) {
+    blk_adjust++;
+  }
+  _cfg.unmap_node_from_block(old);
   OptoReg::Name old_reg = lrgs(_lrg_map.live_range_id(old)).reg();
   if( regnd && (*regnd)[old_reg] == old ) { // Instruction is currently available?
     value->map(old_reg, NULL);  // Yank from value/regnd maps
...
@@ -433,7 +435,7 @@ void PhaseChaitin::post_allocate_copy_removal() {
     bool missing_some_inputs = false;
     Block *freed = NULL;
     for( j = 1; j < b->num_preds(); j++ ) {
-      Block *pb = _cfg._bbs[b->pred(j)->_idx];
+      Block* pb = _cfg.get_block_for_node(b->pred(j));
       // Remove copies along phi edges
       for( uint k=1; k<phi_dex; k++ )
         elide_copy( b->_nodes[k], j, b, *blk2value[pb->_pre_order], *blk2regnd[pb->_pre_order], false );
...
@@ -478,7 +480,7 @@ void PhaseChaitin::post_allocate_copy_removal() {
     } else {
       if( !freed ) {            // Didn't get a freebie prior block
         // Must clone some data
-        freed = _cfg._bbs[b->pred(1)->_idx];
+        freed = _cfg.get_block_for_node(b->pred(1));
         Node_List &f_value = *blk2value[freed->_pre_order];
         Node_List &f_regnd = *blk2regnd[freed->_pre_order];
         for( uint k = 0; k < (uint)_max_reg; k++ ) {
...
@@ -488,7 +490,7 @@ void PhaseChaitin::post_allocate_copy_removal() {
       }
       // Merge all inputs together, setting to NULL any conflicts.
       for( j = 1; j < b->num_preds(); j++ ) {
-        Block *pb = _cfg._bbs[b->pred(j)->_idx];
+        Block* pb = _cfg.get_block_for_node(b->pred(j));
         if( pb == freed ) continue; // Did self already via freelist
         Node_List &p_regnd = *blk2regnd[pb->_pre_order];
         for( uint k = 0; k < (uint)_max_reg; k++ ) {
...
@@ -515,8 +517,9 @@ void PhaseChaitin::post_allocate_copy_removal() {
         u = u ? NodeSentinel : x; // Capture unique input, or NodeSentinel for 2nd input
       }
       if( u != NodeSentinel ) {    // Junk Phi.  Remove
-        b->_nodes.remove(j--); phi_dex--;
-        _cfg._bbs.map(phi->_idx, NULL);
+        b->_nodes.remove(j--);
+        phi_dex--;
+        _cfg.unmap_node_from_block(phi);
         phi->replace_by(u);
         phi->disconnect_inputs(NULL, C);
         continue;
...
src/share/vm/opto/reg_split.cpp
@@ -132,7 +132,7 @@ void PhaseChaitin::insert_proj( Block *b, uint i, Node *spill, uint maxlrg ) {
   }
   b->_nodes.insert(i, spill);    // Insert node in block
-  _cfg._bbs.map(spill->_idx, b); // Update node->block mapping to reflect
+  _cfg.map_node_to_block(spill, b); // Update node->block mapping to reflect
   // Adjust the point where we go hi-pressure
   if( i <= b->_ihrp_index ) b->_ihrp_index++;
   if( i <= b->_fhrp_index ) b->_fhrp_index++;
...
@@ -219,7 +219,7 @@ uint PhaseChaitin::split_USE( Node *def, Block *b, Node *use, uint useidx, uint
       use->set_req(useidx, def);
     } else {
       // Block and index where the use occurs.
-      Block *b = _cfg._bbs[use->_idx];
+      Block *b = _cfg.get_block_for_node(use);
       // Put the clone just prior to use
       int bindex = b->find_node(use);
       // DEF is UP, so must copy it DOWN and hook in USE
...
@@ -270,7 +270,7 @@ uint PhaseChaitin::split_USE( Node *def, Block *b, Node *use, uint useidx, uint
   int bindex;
   // Phi input spill-copys belong at the end of the prior block
   if( use->is_Phi() ) {
-    b = _cfg._bbs[b->pred(useidx)->_idx];
+    b = _cfg.get_block_for_node(b->pred(useidx));
     bindex = b->end_idx();
   } else {
     // Put the clone just prior to use
...
@@ -335,7 +335,7 @@ Node *PhaseChaitin::split_Rematerialize( Node *def, Block *b, uint insidx, uint
       continue;
     }
-    Block *b_def = _cfg._bbs[def->_idx];
+    Block *b_def = _cfg.get_block_for_node(def);
     int idx_def = b_def->find_node(def);
     Node *in_spill = get_spillcopy_wide( in, def, i );
     if( !in_spill ) return 0;   // Bailed out
...
@@ -589,7 +589,7 @@ uint PhaseChaitin::Split(uint maxlrg, ResourceArea* split_arena) {
             UPblock[slidx] = true;
             // Record following instruction in case 'n' rematerializes and
             // kills flags
-            Block *pred1 = _cfg._bbs[b->pred(1)->_idx];
+            Block *pred1 = _cfg.get_block_for_node(b->pred(1));
             continue;
           }
...
@@ -601,7 +601,7 @@ uint PhaseChaitin::Split(uint maxlrg, ResourceArea* split_arena) {
           // Grab predecessor block header
           n1 = b->pred(1);
           // Grab the appropriate reaching def info for inpidx
-          pred = _cfg._bbs[n1->_idx];
+          pred = _cfg.get_block_for_node(n1);
           pidx = pred->_pre_order;
           Node **Ltmp = Reaches[pidx];
           bool  *Utmp = UP[pidx];
...
@@ -616,7 +616,7 @@ uint PhaseChaitin::Split(uint maxlrg, ResourceArea* split_arena) {
             // Grab predecessor block headers
             n2 = b->pred(inpidx);
             // Grab the appropriate reaching def info for inpidx
-            pred = _cfg._bbs[n2->_idx];
+            pred = _cfg.get_block_for_node(n2);
             pidx = pred->_pre_order;
             Ltmp = Reaches[pidx];
             Utmp = UP[pidx];
...
@@ -701,7 +701,7 @@ uint PhaseChaitin::Split(uint maxlrg, ResourceArea* split_arena) {
         // Grab predecessor block header
         n1 = b->pred(1);
         // Grab the appropriate reaching def info for k
-        pred = _cfg._bbs[n1->_idx];
+        pred = _cfg.get_block_for_node(n1);
         pidx = pred->_pre_order;
         Node **Ltmp = Reaches[pidx];
         bool  *Utmp = UP[pidx];
...
@@ -919,7 +919,7 @@ uint PhaseChaitin::Split(uint maxlrg, ResourceArea* split_arena) {
             return 0;
           }
           _lrg_map.extend(def->_idx, 0);
-          _cfg._bbs.map(def->_idx, b);
+          _cfg.map_node_to_block(def, b);
           n->set_req(inpidx, def);
           continue;
         }
...
@@ -1291,7 +1291,7 @@ uint PhaseChaitin::Split(uint maxlrg, ResourceArea* split_arena) {
   for( insidx = 0; insidx < phis->size(); insidx++ ) {
     Node *phi = phis->at(insidx);
     assert(phi->is_Phi(), "This list must only contain Phi Nodes");
-    Block *b = _cfg._bbs[phi->_idx];
+    Block *b = _cfg.get_block_for_node(phi);
     // Grab the live range number
     uint lidx = _lrg_map.find_id(phi);
     uint slidx = lrg2reach[lidx];
...
@@ -1315,7 +1315,7 @@ uint PhaseChaitin::Split(uint maxlrg, ResourceArea* split_arena) {
     // DEF has the wrong UP/DOWN value.
     for( uint i = 1; i < b->num_preds(); i++ ) {
       // Get predecessor block pre-order number
-      Block *pred = _cfg._bbs[b->pred(i)->_idx];
+      Block *pred = _cfg.get_block_for_node(b->pred(i));
       pidx = pred->_pre_order;
       // Grab reaching def
       Node *def = Reaches[pidx][slidx];
...
src/share/vm/runtime/vmStructs.cpp
@@ -1098,7 +1098,7 @@ typedef BinaryTreeDictionary<Metablock, FreeList> MetablockTreeDictionary;
                                                                              \
   c2_nonstatic_field(PhaseCFG,           _num_blocks,             uint)      \
   c2_nonstatic_field(PhaseCFG,           _blocks,                 Block_List) \
-  c2_nonstatic_field(PhaseCFG,           _bbs,                    Block_Array) \
+  c2_nonstatic_field(PhaseCFG,           _node_to_block_mapping,  Block_Array) \
   c2_nonstatic_field(PhaseCFG,           _broot,                  Block*)    \
                                                                              \
   c2_nonstatic_field(PhaseRegAlloc,      _node_regs,              OptoRegPair*) \
...
test/compiler/whitebox/ClearMethodStateTest.java
@@ -26,7 +26,7 @@
  * @library /testlibrary /testlibrary/whitebox
  * @build ClearMethodStateTest
  * @run main ClassFileInstaller sun.hotspot.WhiteBox
- * @run main/othervm -Xbootclasspath/a:. -Xmixed -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI ClearMethodStateTest
+ * @run main/othervm -Xbootclasspath/a:. -Xmixed -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI -XX:CompileCommand=compileonly,TestCase$Helper::* ClearMethodStateTest
  * @summary testing of WB::clearMethodState()
  * @author igor.ignatyev@oracle.com
  */
...
test/compiler/whitebox/CompilerWhiteBoxTest.java
@@ -61,6 +61,9 @@ public abstract class CompilerWhiteBoxTest {
     /** Value of {@code -XX:TieredStopAtLevel} */
     protected static final int TIERED_STOP_AT_LEVEL
             = Integer.parseInt(getVMOption("TieredStopAtLevel", "0"));
+    /** Flag for verbose output, true if {@code -Dverbose} specified */
+    protected static final boolean IS_VERBOSE
+            = System.getProperty("verbose") != null;

     /**
      * Returns value of VM option.
...
@@ -268,7 +271,9 @@ public abstract class CompilerWhiteBoxTest {
             }
             result += tmp == null ? 0 : tmp;
         }
-        System.out.println("method was invoked " + count + " times");
+        if (IS_VERBOSE) {
+            System.out.println("method was invoked " + count + " times");
+        }
         return result;
     }
 }
...
test/compiler/whitebox/DeoptimizeAllTest.java
@@ -26,7 +26,7 @@
  * @library /testlibrary /testlibrary/whitebox
  * @build DeoptimizeAllTest
  * @run main ClassFileInstaller sun.hotspot.WhiteBox
- * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI DeoptimizeAllTest
+ * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI -XX:CompileCommand=compileonly,TestCase$Helper::* DeoptimizeAllTest
  * @summary testing of WB::deoptimizeAll()
  * @author igor.ignatyev@oracle.com
  */
...
test/compiler/whitebox/DeoptimizeMethodTest.java
...
...
@@ -26,7 +26,7 @@
* @library /testlibrary /testlibrary/whitebox
* @build DeoptimizeMethodTest
* @run main ClassFileInstaller sun.hotspot.WhiteBox
- * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI DeoptimizeMethodTest
+ * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI -XX:CompileCommand=compileonly,TestCase$Helper::* DeoptimizeMethodTest
* @summary testing of WB::deoptimizeMethod()
* @author igor.ignatyev@oracle.com
*/
...
...
test/compiler/whitebox/EnqueueMethodForCompilationTest.java
...
...
@@ -26,7 +26,7 @@
* @library /testlibrary /testlibrary/whitebox
* @build EnqueueMethodForCompilationTest
* @run main ClassFileInstaller sun.hotspot.WhiteBox
- * @run main/othervm -Xbootclasspath/a:. -Xmixed -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI EnqueueMethodForCompilationTest
+ * @run main/othervm -Xbootclasspath/a:. -Xmixed -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI -XX:CompileCommand=compileonly,TestCase$Helper::* EnqueueMethodForCompilationTest
* @summary testing of WB::enqueueMethodForCompilation()
* @author igor.ignatyev@oracle.com
*/
...
...
test/compiler/whitebox/IsMethodCompilableTest.java
...
...
@@ -27,7 +27,7 @@
* @library /testlibrary /testlibrary/whitebox
* @build IsMethodCompilableTest
* @run main ClassFileInstaller sun.hotspot.WhiteBox
- * @run main/othervm/timeout=600 -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI IsMethodCompilableTest
+ * @run main/othervm/timeout=600 -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI -XX:CompileCommand=compileonly,TestCase$Helper::* IsMethodCompilableTest
* @summary testing of WB::isMethodCompilable()
* @author igor.ignatyev@oracle.com
*/
...
...
test/compiler/whitebox/MakeMethodNotCompilableTest.java
...
...
@@ -27,7 +27,7 @@
* @library /testlibrary /testlibrary/whitebox
* @build MakeMethodNotCompilableTest
* @run main ClassFileInstaller sun.hotspot.WhiteBox
- * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI MakeMethodNotCompilableTest
+ * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI -XX:CompileCommand=compileonly,TestCase$Helper::* MakeMethodNotCompilableTest
* @summary testing of WB::makeMethodNotCompilable()
* @author igor.ignatyev@oracle.com
*/
...
...
test/compiler/whitebox/SetDontInlineMethodTest.java
...
...
@@ -26,7 +26,7 @@
* @library /testlibrary /testlibrary/whitebox
* @build SetDontInlineMethodTest
* @run main ClassFileInstaller sun.hotspot.WhiteBox
- * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI SetDontInlineMethodTest
+ * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI -XX:CompileCommand=compileonly,TestCase$Helper::* SetDontInlineMethodTest
* @summary testing of WB::testSetDontInlineMethod()
* @author igor.ignatyev@oracle.com
*/
...
...
test/compiler/whitebox/SetForceInlineMethodTest.java
...
...
@@ -26,7 +26,7 @@
* @library /testlibrary /testlibrary/whitebox
* @build SetForceInlineMethodTest
* @run main ClassFileInstaller sun.hotspot.WhiteBox
- * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI SetForceInlineMethodTest
+ * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI -XX:CompileCommand=compileonly,TestCase$Helper::* SetForceInlineMethodTest
* @summary testing of WB::testSetForceInlineMethod()
* @author igor.ignatyev@oracle.com
*/
...
...