Commit 07f7626d authored by: kvn

Merge

<html>
<head>
<title>
C2 Replay
Replay
</title>
</head>
<body>
<h1>C2 compiler replay</h1>
<h1>Compiler replay</h1>
<p>
The C2 compiler replay is a feature that repeats the compilation of the method in which a java process crashed<br>
The compiler replay is a feature that repeats the compilation of the method in which a java process crashed<br>
This feature is available only in the debug build of the VM
</p>
<h2>Usage</h2>
<pre>
First, use SA to attach to the core file; if it succeeds, do
clhsdb>dumpreplaydata <address> | -a | <thread_id> [> replay.txt]
<pre>
First, use SA to attach to the core file; if it succeeds, do
hsdb&gt; dumpreplaydata &lt;address&gt; | -a | &lt;thread_id&gt; [&gt; replay.txt]
this creates the file replay.txt; the address is the address of a Method or an nmethod (CodeBlob)
clhsdb>buildreplayjars [all | boot | app]
hsdb&gt; buildreplayjars [all | boot | app]
create files:
all:
app.jar, boot.jar
......@@ -26,16 +26,16 @@ First, use SA to attach to the core file, if suceeded, do
app.jar
exit SA now.
Second, use the obtained replay text file (replay.txt) and the jar files (app.jar and boot.jar) with a debug version of java
java -Xbootclasspath/p:boot.jar -cp app.jar -XX:ReplayDataFile=<datafile> -XX:+ReplayCompiles ....
java -Xbootclasspath/p:boot.jar -cp app.jar -XX:ReplayDataFile=&lt;datafile&gt; -XX:+ReplayCompiles ....
This will replay the compilation process.
With ReplayCompiles, the replay recompiles the methods in app.jar and boot.jar to emulate the compilation process in the java app.
Notes:
1) Most of the time, we don't need boot.jar, which contains the classes loaded from the JDK. It is only modified when an agent (JVMDI) is running and modifies the classes.
2) If you encounter an error such as "<flag>" not found, the SA is using a VMStructs that differs from the one in the core file. In this case, SA provides a utility tool, vmstructsdump, located at agent/src/os/<os>/proc/<os_platform>
2) If you encounter an error such as "&lt;flag&gt;" not found, the SA is using a VMStructs that differs from the one in the core file. In this case, SA provides a utility tool, vmstructsdump, located at agent/src/os/&lt;os&gt;/proc/&lt;os_platform&gt;
Use this tool to dump the VM type library:
vmstructsdump libjvm.so > <type_name>.db
vmstructsdump libjvm.so &gt; &lt;type_name&gt;.db
set env SA_TYPEDB=<type_name>.db (how to set environment variables depends on your shell)
set env SA_TYPEDB=&lt;type_name&gt;.db (how to set environment variables depends on your shell)
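A condensed example of the whole workflow is sketched below; the replayed application's own arguments are left as "...." just as above, and this assumes the SA session produced both jars:
hsdb&gt; dumpreplaydata -a &gt; replay.txt
hsdb&gt; buildreplayjars all
(exit SA)
java -Xbootclasspath/p:boot.jar -cp app.jar -XX:ReplayDataFile=replay.txt -XX:+ReplayCompiles ....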
......@@ -15,7 +15,7 @@ GUI tools. Command line HSDB (CLHSDB) tool is alternative to SA GUI tool HSDB.
<p>
There is also a JavaScript-based SA command line interface called <a href="jsdb.html">jsdb</a>.
But CLHSDB supports a Unix shell-like (or dbx/gdb-like) command line interface with
support for output redirection/appending (familiar >, >>), command history and so on.
support for output redirection/appending (familiar &gt;, &gt;&gt;), command history and so on.
Each CLHSDB command can have zero or more arguments and can optionally end with output redirection
(or append) to a file. Commands may be stored in a file and run using the <b>source</b> command.
The <b>help</b> command prints a usage message for all supported commands (or for a specific command).
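A short, hedged example of such a session (the redirect target and the command file name are placeholders):
<pre>
hsdb&gt; echo true
hsdb&gt; dumpreplaydata -a &gt; replay.txt
hsdb&gt; source my_commands.txt
</pre>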
......@@ -49,7 +49,7 @@ Available commands:
dumpheap [ file ] <font color="red">dump heap in hprof binary format</font>
dumpideal -a | id <font color="red">dump ideal graph like debug flag -XX:+PrintIdeal</font>
dumpilt -a | id <font color="red">dump inline tree for C2 compilation</font>
dumpreplaydata <address> | -a | <thread_id> [>replay.txt] <font color="red">dump replay data into a file</font>
dumpreplaydata &lt;address&gt; | -a | &lt;thread_id&gt; [&gt;replay.txt] <font color="red">dump replay data into a file</font>
echo [ true | false ] <font color="red">turn on/off command echo mode</font>
examine [ address/count ] | [ address,address] <font color="red">show contents of memory from given address</font>
field [ type [ name fieldtype isStatic offset address ] ] <font color="red">print info about a field of HotSpot type</font>
......@@ -96,11 +96,11 @@ Available commands:
<h3>JavaScript integration</h3>
<p>A few CLHSDB commands are already implemented in JavaScript. It is possible to extend the CLHSDB command set
by implementing more commands in a JavaScript file and loading it with the <b>jsload</b> command. The <b>jseval</b>
command may be used to evaluate an arbitrary JavaScript expression given as a string. Any JavaScript function
may be exposed as a CLHSDB command by registering it with the JavaScript <b><code>registerCommand</code></b>
function. This function accepts the command name, a usage string, and the name of the JavaScript implementation function
as arguments.
</p>
......@@ -127,11 +127,11 @@ hsdb&gt; jsload test.js
</code>
</pre>
<h3>C2 Compilation Replay</h3>
<h3>Compilation Replay</h3>
<p>
When a java process crashes in a compiled method, a core file is usually saved.
The C2 replay function can reproduce the compilation process recorded in the core file.
<a href="c2replay.html">c2replay.html</a>
The replay function can reproduce the compilation process recorded in the core file.
<a href="cireplay.html">cireplay.html</a>
</body>
</html>
......@@ -93,10 +93,11 @@ public class ciEnv extends VMObject {
CompileTask task = task();
Method method = task.method();
int entryBci = task.osrBci();
int compLevel = task.compLevel();
Klass holder = method.getMethodHolder();
out.println("compile " + holder.getName().asString() + " " +
OopUtilities.escapeString(method.getName().asString()) + " " +
method.getSignature().asString() + " " +
entryBci);
entryBci + " " + compLevel);
}
}
......@@ -78,6 +78,8 @@ public class NMethod extends CodeBlob {
current sweep traversal index. */
private static CIntegerField stackTraversalMarkField;
private static CIntegerField compLevelField;
static {
VM.registerVMInitializedObserver(new Observer() {
public void update(Observable o, Object data) {
......@@ -113,7 +115,7 @@ public class NMethod extends CodeBlob {
osrEntryPointField = type.getAddressField("_osr_entry_point");
lockCountField = type.getJIntField("_lock_count");
stackTraversalMarkField = type.getCIntegerField("_stack_traversal_mark");
compLevelField = type.getCIntegerField("_comp_level");
pcDescSize = db.lookupType("PcDesc").getSize();
}
......@@ -530,7 +532,7 @@ public class NMethod extends CodeBlob {
out.println("compile " + holder.getName().asString() + " " +
OopUtilities.escapeString(method.getName().asString()) + " " +
method.getSignature().asString() + " " +
getEntryBCI());
getEntryBCI() + " " + getCompLevel());
}
......@@ -551,4 +553,5 @@ public class NMethod extends CodeBlob {
private int getHandlerTableOffset() { return (int) handlerTableOffsetField.getValue(addr); }
private int getNulChkTableOffset() { return (int) nulChkTableOffsetField .getValue(addr); }
private int getNMethodEndOffset() { return (int) nmethodEndOffsetField .getValue(addr); }
private int getCompLevel() { return (int) compLevelField .getValue(addr); }
}
......@@ -46,10 +46,12 @@ public class CompileTask extends VMObject {
Type type = db.lookupType("CompileTask");
methodField = type.getAddressField("_method");
osrBciField = new CIntField(type.getCIntegerField("_osr_bci"), 0);
compLevelField = new CIntField(type.getCIntegerField("_comp_level"), 0);
}
private static AddressField methodField;
private static CIntField osrBciField;
private static CIntField compLevelField;
public CompileTask(Address addr) {
super(addr);
......@@ -63,4 +65,8 @@ public class CompileTask extends VMObject {
public int osrBci() {
return (int)osrBciField.getValue(getAddress());
}
public int compLevel() {
return (int)compLevelField.getValue(getAddress());
}
}
......@@ -1230,10 +1230,6 @@ bool os::dll_build_name(char* buffer, size_t buflen,
return retval;
}
const char* os::get_current_directory(char *buf, int buflen) {
return getcwd(buf, buflen);
}
// check if addr is inside libjvm.so
bool os::address_is_in_vm(address addr) {
static address libjvm_base_addr;
......
......@@ -1663,10 +1663,6 @@ bool os::dll_build_name(char* buffer, size_t buflen,
return retval;
}
const char* os::get_current_directory(char *buf, int buflen) {
return getcwd(buf, buflen);
}
// check if addr is inside libjvm.so
bool os::address_is_in_vm(address addr) {
static address libjvm_base_addr;
......
......@@ -251,3 +251,11 @@ bool os::has_allocatable_memory_limit(julong* limit) {
return true;
#endif
}
const char* os::get_current_directory(char *buf, size_t buflen) {
return getcwd(buf, buflen);
}
FILE* os::open(int fd, const char* mode) {
return ::fdopen(fd, mode);
}
......@@ -1916,10 +1916,6 @@ bool os::dll_build_name(char* buffer, size_t buflen,
return retval;
}
const char* os::get_current_directory(char *buf, int buflen) {
return getcwd(buf, buflen);
}
// check if addr is inside libjvm.so
bool os::address_is_in_vm(address addr) {
static address libjvm_base_addr;
......
......@@ -1221,8 +1221,10 @@ bool os::dll_build_name(char *buffer, size_t buflen,
// Needs to be in os specific directory because windows requires another
// header file <direct.h>
const char* os::get_current_directory(char *buf, int buflen) {
return _getcwd(buf, buflen);
const char* os::get_current_directory(char *buf, size_t buflen) {
int n = static_cast<int>(buflen);
if (buflen > INT_MAX) n = INT_MAX;
return _getcwd(buf, n);
}
//-----------------------------------------------------------
......@@ -4098,6 +4100,10 @@ int os::open(const char *path, int oflag, int mode) {
return ::open(pathbuf, oflag | O_BINARY | O_NOINHERIT, mode);
}
FILE* os::open(int fd, const char* mode) {
return ::_fdopen(fd, mode);
}
// Is a (classpath) directory empty?
bool os::dir_is_empty(const char* path) {
WIN32_FIND_DATA fd;
......
......@@ -1150,23 +1150,9 @@ void ciEnv::record_out_of_memory_failure() {
record_method_not_compilable("out of memory");
}
fileStream* ciEnv::_replay_data_stream = NULL;
void ciEnv::dump_replay_data() {
void ciEnv::dump_replay_data(outputStream* out) {
VM_ENTRY_MARK;
MutexLocker ml(Compile_lock);
if (_replay_data_stream == NULL) {
_replay_data_stream = new (ResourceObj::C_HEAP, mtCompiler) fileStream(ReplayDataFile);
if (_replay_data_stream == NULL) {
fatal(err_msg("Can't open %s for replay data", ReplayDataFile));
}
}
dump_replay_data(_replay_data_stream);
}
void ciEnv::dump_replay_data(outputStream* out) {
ASSERT_IN_VM;
ResourceMark rm;
#if INCLUDE_JVMTI
out->print_cr("JvmtiExport can_access_local_variables %d", _jvmti_can_access_local_variables);
......@@ -1179,13 +1165,15 @@ void ciEnv::dump_replay_data(outputStream* out) {
for (int i = 0; i < objects->length(); i++) {
objects->at(i)->dump_replay_data(out);
}
Method* method = task()->method();
int entry_bci = task()->osr_bci();
CompileTask* task = this->task();
Method* method = task->method();
int entry_bci = task->osr_bci();
int comp_level = task->comp_level();
// Klass holder = method->method_holder();
out->print_cr("compile %s %s %s %d",
out->print_cr("compile %s %s %s %d %d",
method->klass_name()->as_quoted_ascii(),
method->name()->as_quoted_ascii(),
method->signature()->as_quoted_ascii(),
entry_bci);
entry_bci, comp_level);
out->flush();
}
......@@ -46,8 +46,6 @@ class ciEnv : StackObj {
friend class CompileBroker;
friend class Dependencies; // for get_object, during logging
static fileStream* _replay_data_stream;
private:
Arena* _arena; // Alias for _ciEnv_arena except in init_shared_objects()
Arena _ciEnv_arena;
......@@ -451,10 +449,6 @@ public:
// RedefineClasses support
void metadata_do(void f(Metadata*)) { _factory->metadata_do(f); }
// Dump the compilation replay data for this ciEnv to
// ReplayDataFile, creating the file if needed.
void dump_replay_data();
// Dump the compilation replay data for the ciEnv to the stream.
void dump_replay_data(outputStream* out);
};
......
......@@ -196,7 +196,6 @@ class ciMethod : public ciMetadata {
// Analysis and profiling.
//
// Usage note: liveness_at_bci and init_vars should be wrapped in ResourceMarks.
bool uses_monitors() const { return _uses_monitors; } // this one should go away, it has a misleading name
bool has_monitor_bytecodes() const { return _uses_monitors; }
bool has_balanced_monitors();
......
......@@ -89,7 +89,7 @@ class CompileReplay : public StackObj {
loader = Handle(thread, SystemDictionary::java_system_loader());
stream = fopen(filename, "rt");
if (stream == NULL) {
fprintf(stderr, "Can't open replay file %s\n", filename);
fprintf(stderr, "ERROR: Can't open replay file %s\n", filename);
}
buffer_length = 32;
buffer = NEW_RESOURCE_ARRAY(char, buffer_length);
......@@ -327,7 +327,6 @@ class CompileReplay : public StackObj {
if (had_error()) {
tty->print_cr("Error while parsing line %d: %s\n", line_no, _error_message);
tty->print_cr("%s", buffer);
assert(false, "error");
return;
}
pos = 0;
......@@ -370,11 +369,47 @@ class CompileReplay : public StackObj {
}
}
// compile <klass> <name> <signature> <entry_bci>
// validation of comp_level
bool is_valid_comp_level(int comp_level) {
const int msg_len = 256;
char* msg = NULL;
if (!is_compile(comp_level)) {
msg = NEW_RESOURCE_ARRAY(char, msg_len);
jio_snprintf(msg, msg_len, "%d isn't compilation level", comp_level);
} else if (!TieredCompilation && (comp_level != CompLevel_highest_tier)) {
msg = NEW_RESOURCE_ARRAY(char, msg_len);
switch (comp_level) {
case CompLevel_simple:
jio_snprintf(msg, msg_len, "compilation level %d requires Client VM or TieredCompilation", comp_level);
break;
case CompLevel_full_optimization:
jio_snprintf(msg, msg_len, "compilation level %d requires Server VM", comp_level);
break;
default:
jio_snprintf(msg, msg_len, "compilation level %d requires TieredCompilation", comp_level);
}
}
if (msg != NULL) {
report_error(msg);
return false;
}
return true;
}
// compile <klass> <name> <signature> <entry_bci> <comp_level>
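// A hypothetical example of such a line (for illustration only):
//   compile java/lang/String hashCode ()I -1 4
// where -1 is InvocationEntryBci and 4 is CompLevel_full_optimization.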
void process_compile(TRAPS) {
// methodHandle method;
Method* method = parse_method(CHECK);
int entry_bci = parse_int("entry_bci");
const char* comp_level_label = "comp_level";
int comp_level = parse_int(comp_level_label);
// old version w/o comp_level
if (had_error() && (error_message() == comp_level_label)) {
comp_level = CompLevel_full_optimization;
}
if (!is_valid_comp_level(comp_level)) {
return;
}
Klass* k = method->method_holder();
((InstanceKlass*)k)->initialize(THREAD);
if (HAS_PENDING_EXCEPTION) {
......@@ -389,12 +424,12 @@ class CompileReplay : public StackObj {
}
}
// Make sure the existence of a prior compile doesn't stop this one
nmethod* nm = (entry_bci != InvocationEntryBci) ? method->lookup_osr_nmethod_for(entry_bci, CompLevel_full_optimization, true) : method->code();
nmethod* nm = (entry_bci != InvocationEntryBci) ? method->lookup_osr_nmethod_for(entry_bci, comp_level, true) : method->code();
if (nm != NULL) {
nm->make_not_entrant();
}
replay_state = this;
CompileBroker::compile_method(method, entry_bci, CompLevel_full_optimization,
CompileBroker::compile_method(method, entry_bci, comp_level,
methodHandle(), 0, "replay", THREAD);
replay_state = NULL;
reset();
......@@ -551,7 +586,7 @@ class CompileReplay : public StackObj {
if (parsed_two_word == i) continue;
default:
ShouldNotReachHere();
fatal(err_msg_res("Unexpected tag: %d", cp->tag_at(i).value()));
break;
}
......@@ -819,6 +854,11 @@ int ciReplay::replay_impl(TRAPS) {
ReplaySuppressInitializers = 1;
}
if (FLAG_IS_DEFAULT(ReplayDataFile)) {
tty->print_cr("ERROR: no compiler replay data file specified (use -XX:ReplayDataFile=replay_pid12345.txt).");
return 1;
}
// Load and parse the replay data
CompileReplay rp(ReplayDataFile, THREAD);
int exit_code = 0;
......
......@@ -1345,9 +1345,10 @@ void ClassLoader::compile_the_world_in(char* name, Handle loader, TRAPS) {
tty->print_cr("CompileTheWorld (%d) : %s", _compile_the_world_class_counter, buffer);
// Preload all classes to get around uncommon traps
// Iterate over all methods in class
int comp_level = CompilationPolicy::policy()->initial_compile_level();
for (int n = 0; n < k->methods()->length(); n++) {
methodHandle m (THREAD, k->methods()->at(n));
if (CompilationPolicy::can_be_compiled(m)) {
if (CompilationPolicy::can_be_compiled(m, comp_level)) {
if (++_codecache_sweep_counter == CompileTheWorldSafepointInterval) {
// Give sweeper a chance to keep up with CTW
......@@ -1356,7 +1357,7 @@ void ClassLoader::compile_the_world_in(char* name, Handle loader, TRAPS) {
_codecache_sweep_counter = 0;
}
// Force compilation
CompileBroker::compile_method(m, InvocationEntryBci, CompilationPolicy::policy()->initial_compile_level(),
CompileBroker::compile_method(m, InvocationEntryBci, comp_level,
methodHandle(), 0, "CTW", THREAD);
if (HAS_PENDING_EXCEPTION) {
clear_pending_exception_if_not_oom(CHECK);
......
......@@ -463,8 +463,10 @@ void CodeCache::verify_perm_nmethods(CodeBlobClosure* f_or_null) {
}
#endif //PRODUCT
nmethod* CodeCache::find_and_remove_saved_code(Method* m) {
/**
* Remove and return nmethod from the saved code list in order to reanimate it.
*/
nmethod* CodeCache::reanimate_saved_code(Method* m) {
MutexLockerEx mu(CodeCache_lock, Mutex::_no_safepoint_check_flag);
nmethod* saved = _saved_nmethods;
nmethod* prev = NULL;
......@@ -479,7 +481,7 @@ nmethod* CodeCache::find_and_remove_saved_code(Method* m) {
saved->set_speculatively_disconnected(false);
saved->set_saved_nmethod_link(NULL);
if (PrintMethodFlushing) {
saved->print_on(tty, " ### nmethod is reconnected\n");
saved->print_on(tty, " ### nmethod is reconnected");
}
if (LogCompilation && (xtty != NULL)) {
ttyLocker ttyl;
......@@ -496,6 +498,9 @@ nmethod* CodeCache::find_and_remove_saved_code(Method* m) {
return NULL;
}
/**
* Remove nmethod from the saved code list in order to discard it permanently
*/
void CodeCache::remove_saved_code(nmethod* nm) {
// For conc swpr this will be called with CodeCache_lock taken by caller
assert_locked_or_safepoint(CodeCache_lock);
......@@ -529,7 +534,7 @@ void CodeCache::speculatively_disconnect(nmethod* nm) {
nm->set_saved_nmethod_link(_saved_nmethods);
_saved_nmethods = nm;
if (PrintMethodFlushing) {
nm->print_on(tty, " ### nmethod is speculatively disconnected\n");
nm->print_on(tty, " ### nmethod is speculatively disconnected");
}
if (LogCompilation && (xtty != NULL)) {
ttyLocker ttyl;
......
......@@ -57,7 +57,7 @@ class CodeCache : AllStatic {
static int _number_of_nmethods_with_dependencies;
static bool _needs_cache_clean;
static nmethod* _scavenge_root_nmethods; // linked via nm->scavenge_root_link()
static nmethod* _saved_nmethods; // linked via nm->saved_nmethod_look()
static nmethod* _saved_nmethods; // Linked list of speculatively disconnected nmethods.
static void verify_if_often() PRODUCT_RETURN;
......@@ -168,7 +168,7 @@ class CodeCache : AllStatic {
static void set_needs_cache_clean(bool v) { _needs_cache_clean = v; }
static void clear_inline_caches(); // clear all inline caches
static nmethod* find_and_remove_saved_code(Method* m);
static nmethod* reanimate_saved_code(Method* m);
static void remove_saved_code(nmethod* nm);
static void speculatively_disconnect(nmethod* nm);
......
......@@ -65,7 +65,7 @@ HS_DTRACE_PROBE_DECL8(hotspot, method__compile__begin,
HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
char*, intptr_t, char*, intptr_t, char*, intptr_t, char*, intptr_t, bool);
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(compiler, method, comp_name) \
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(method, comp_name) \
{ \
Symbol* klass_name = (method)->klass_name(); \
Symbol* name = (method)->name(); \
......@@ -77,8 +77,7 @@ HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
signature->bytes(), signature->utf8_length()); \
}
#define DTRACE_METHOD_COMPILE_END_PROBE(compiler, method, \
comp_name, success) \
#define DTRACE_METHOD_COMPILE_END_PROBE(method, comp_name, success) \
{ \
Symbol* klass_name = (method)->klass_name(); \
Symbol* name = (method)->name(); \
......@@ -92,7 +91,7 @@ HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
#else /* USDT2 */
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(compiler, method, comp_name) \
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(method, comp_name) \
{ \
Symbol* klass_name = (method)->klass_name(); \
Symbol* name = (method)->name(); \
......@@ -104,8 +103,7 @@ HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
(char *) signature->bytes(), signature->utf8_length()); \
}
#define DTRACE_METHOD_COMPILE_END_PROBE(compiler, method, \
comp_name, success) \
#define DTRACE_METHOD_COMPILE_END_PROBE(method, comp_name, success) \
{ \
Symbol* klass_name = (method)->klass_name(); \
Symbol* name = (method)->name(); \
......@@ -120,8 +118,8 @@ HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
#else // ndef DTRACE_ENABLED
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(compiler, method, comp_name)
#define DTRACE_METHOD_COMPILE_END_PROBE(compiler, method, comp_name, success)
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(method, comp_name)
#define DTRACE_METHOD_COMPILE_END_PROBE(method, comp_name, success)
#endif // ndef DTRACE_ENABLED
......@@ -1229,7 +1227,7 @@ nmethod* CompileBroker::compile_method(methodHandle method, int osr_bci,
if (method->is_not_compilable(comp_level)) return NULL;
if (UseCodeCacheFlushing) {
nmethod* saved = CodeCache::find_and_remove_saved_code(method());
nmethod* saved = CodeCache::reanimate_saved_code(method());
if (saved != NULL) {
method->set_code(method, saved);
return saved;
......@@ -1288,9 +1286,9 @@ nmethod* CompileBroker::compile_method(methodHandle method, int osr_bci,
method->jmethod_id();
}
// If the compiler is shut off due to code cache flushing or otherwise,
// If the compiler is shut off due to code cache getting full
// fail out now so blocking compiles dont hang the java thread
if (!should_compile_new_jobs() || (UseCodeCacheFlushing && CodeCache::needs_flushing())) {
if (!should_compile_new_jobs()) {
CompilationPolicy::policy()->delay_compilation(method());
return NULL;
}
......@@ -1766,8 +1764,7 @@ void CompileBroker::invoke_compiler_on_method(CompileTask* task) {
// Save information about this method in case of failure.
set_last_compile(thread, method, is_osr, task_level);
DTRACE_METHOD_COMPILE_BEGIN_PROBE(compiler(task_level), method,
compiler_name(task_level));
DTRACE_METHOD_COMPILE_BEGIN_PROBE(method, compiler_name(task_level));
}
// Allocate a new set of JNI handles.
......@@ -1842,13 +1839,14 @@ void CompileBroker::invoke_compiler_on_method(CompileTask* task) {
}
}
}
// simulate crash during compilation
assert(task->compile_id() != CICrashAt, "just as planned");
}
pop_jni_handle_block();
methodHandle method(thread, task->method());
DTRACE_METHOD_COMPILE_END_PROBE(compiler(task_level), method,
compiler_name(task_level), task->is_success());
DTRACE_METHOD_COMPILE_END_PROBE(method, compiler_name(task_level), task->is_success());
collect_statistics(thread, time, task);
......
......@@ -877,7 +877,7 @@ address Method::verified_code_entry() {
debug_only(No_Safepoint_Verifier nsv;)
nmethod *code = (nmethod *)OrderAccess::load_ptr_acquire(&_code);
if (code == NULL && UseCodeCacheFlushing) {
nmethod *saved_code = CodeCache::find_and_remove_saved_code(this);
nmethod *saved_code = CodeCache::reanimate_saved_code(this);
if (saved_code != NULL) {
methodHandle method(this);
assert( ! saved_code->is_osr_method(), "should not get here for osr" );
......
......@@ -439,9 +439,29 @@ JVM_ENTRY(void, JVM_RegisterWhiteBoxMethods(JNIEnv* env, jclass wbclass))
instanceKlassHandle ikh = instanceKlassHandle(JNIHandles::resolve(wbclass)->klass());
Handle loader(ikh->class_loader());
if (loader.is_null()) {
ResourceMark rm;
ThreadToNativeFromVM ttnfv(thread); // can't be in VM when we call JNI
jint result = env->RegisterNatives(wbclass, methods, sizeof(methods)/sizeof(methods[0]));
if (result == 0) {
bool result = true;
// register natives one by one so that exceptions can be caught per method
jclass exceptionKlass = env->FindClass(vmSymbols::java_lang_NoSuchMethodError()->as_C_string());
for (int i = 0, n = sizeof(methods) / sizeof(methods[0]); i < n; ++i) {
if (env->RegisterNatives(wbclass, methods + i, 1) != 0) {
result = false;
if (env->ExceptionCheck() && env->IsInstanceOf(env->ExceptionOccurred(), exceptionKlass)) {
// j.l.NoSuchMethodError is thrown when a method can't be found or a method is not native
// ignoring the exception
tty->print_cr("Warning: 'NoSuchMethodError' on register of sun.hotspot.WhiteBox::%s%s", methods[i].name, methods[i].signature);
env->ExceptionClear();
} else {
// registration failed w/o exception or w/ unexpected exception
tty->print_cr("Warning: unexpected error on register of sun.hotspot.WhiteBox::%s%s. All methods will be unregistered", methods[i].name, methods[i].signature);
env->UnregisterNatives(wbclass);
break;
}
}
}
if (result) {
WhiteBox::set_used();
}
}
......
......@@ -109,6 +109,9 @@ bool CompilationPolicy::must_be_compiled(methodHandle m, int comp_level) {
// Returns true if m is allowed to be compiled
bool CompilationPolicy::can_be_compiled(methodHandle m, int comp_level) {
// allow any levels for WhiteBox
assert(WhiteBoxAPI || comp_level == CompLevel_all || is_compile(comp_level), "illegal compilation level");
if (m->is_abstract()) return false;
if (DontCompileHugeMethods && m->code_size() > HugeMethodLimit) return false;
......@@ -122,7 +125,13 @@ bool CompilationPolicy::can_be_compiled(methodHandle m, int comp_level) {
return false;
}
if (comp_level == CompLevel_all) {
return !m->is_not_compilable(CompLevel_simple) && !m->is_not_compilable(CompLevel_full_optimization);
if (TieredCompilation) {
// enough to be compilable at any level for tiered
return !m->is_not_compilable(CompLevel_simple) || !m->is_not_compilable(CompLevel_full_optimization);
} else {
// must be compilable at available level for non-tiered
return !m->is_not_compilable(CompLevel_highest_tier);
}
} else if (is_compile(comp_level)) {
return !m->is_not_compilable(comp_level);
}
......@@ -436,7 +445,7 @@ void SimpleCompPolicy::method_invocation_event(methodHandle m, JavaThread* threa
reset_counter_for_invocation_event(m);
const char* comment = "count";
if (is_compilation_enabled() && can_be_compiled(m)) {
if (is_compilation_enabled() && can_be_compiled(m, comp_level)) {
nmethod* nm = m->code();
if (nm == NULL ) {
CompileBroker::compile_method(m, InvocationEntryBci, comp_level, m, hot_count, comment, thread);
......@@ -449,7 +458,7 @@ void SimpleCompPolicy::method_back_branch_event(methodHandle m, int bci, JavaThr
const int hot_count = m->backedge_count();
const char* comment = "backedge_count";
if (is_compilation_enabled() && !m->is_not_osr_compilable(comp_level) && can_be_compiled(m)) {
if (is_compilation_enabled() && !m->is_not_osr_compilable(comp_level) && can_be_compiled(m, comp_level)) {
CompileBroker::compile_method(m, bci, comp_level, m, hot_count, comment, thread);
NOT_PRODUCT(trace_osr_completion(m->lookup_osr_nmethod_for(bci, comp_level, true));)
}
......@@ -467,7 +476,7 @@ void StackWalkCompPolicy::method_invocation_event(methodHandle m, JavaThread* th
reset_counter_for_invocation_event(m);
const char* comment = "count";
if (is_compilation_enabled() && m->code() == NULL && can_be_compiled(m)) {
if (is_compilation_enabled() && m->code() == NULL && can_be_compiled(m, comp_level)) {
ResourceMark rm(thread);
frame fr = thread->last_frame();
assert(fr.is_interpreted_frame(), "must be interpreted");
......@@ -505,7 +514,7 @@ void StackWalkCompPolicy::method_back_branch_event(methodHandle m, int bci, Java
const int hot_count = m->backedge_count();
const char* comment = "backedge_count";
if (is_compilation_enabled() && !m->is_not_osr_compilable(comp_level) && can_be_compiled(m)) {
if (is_compilation_enabled() && !m->is_not_osr_compilable(comp_level) && can_be_compiled(m, comp_level)) {
CompileBroker::compile_method(m, bci, comp_level, m, hot_count, comment, thread);
NOT_PRODUCT(trace_osr_completion(m->lookup_osr_nmethod_for(bci, comp_level, true));)
}
......@@ -600,7 +609,7 @@ RFrame* StackWalkCompPolicy::findTopInlinableFrame(GrowableArray<RFrame*>* stack
// If the caller method is too big or something then we do not want to
// compile it just to inline a method
if (!can_be_compiled(next_m)) {
if (!can_be_compiled(next_m, CompLevel_any)) {
msg = "caller cannot be compiled";
break;
}
......
......@@ -3182,6 +3182,9 @@ class CommandLineFlags {
product(uintx, CodeCacheFlushingMinimumFreeSpace, 1500*K, \
"When less than X space left, start code cache cleaning") \
\
product(uintx, CodeCacheFlushingFraction, 2, \
"Fraction of the code cache that is flushed when full") \
\
/* interpreter debugging */ \
develop(intx, BinarySwitchThreshold, 5, \
"Minimal number of lookupswitch entries for rewriting to binary " \
......@@ -3226,8 +3229,9 @@ class CommandLineFlags {
develop(bool, ReplayCompiles, false, \
"Enable replay of compilations from ReplayDataFile") \
\
develop(ccstr, ReplayDataFile, "replay.txt", \
"file containing compilation replay information") \
product(ccstr, ReplayDataFile, NULL, \
"File containing compilation replay information" \
"[default: ./replay_pid%p.log] (%p replaced with pid)") \
\
develop(intx, ReplaySuppressInitializers, 2, \
"Controls handling of class initialization during replay" \
......@@ -3240,8 +3244,8 @@ class CommandLineFlags {
develop(bool, ReplayIgnoreInitErrors, false, \
"Ignore exceptions thrown during initialization for replay") \
\
develop(bool, DumpReplayDataOnError, true, \
"record replay data for crashing compiler threads") \
product(bool, DumpReplayDataOnError, true, \
"Record replay data for crashing compiler threads") \
\
product(bool, CICompilerCountPerCPU, false, \
"1 compiler thread for log(N CPUs)") \
......@@ -3250,7 +3254,9 @@ class CommandLineFlags {
"Fire OutOfMemoryErrors throughout CI for testing the compiler " \
"(non-negative value throws OOM after this many CI accesses " \
"in each compile)") \
\
notproduct(intx, CICrashAt, -1, \
"id of compilation to trigger assert in compiler thread for " \
"the purpose of testing, e.g. generation of replay data") \
notproduct(bool, CIObjectFactoryVerify, false, \
"enable potentially expensive verification in ciObjectFactory") \
\
......
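Taken together, the flags above allow replay data to be produced on demand in a debug build; the following is a hedged sketch (the compilation id and the application name are placeholders, and CICrashAt is a notproduct flag, so it exists only in non-product builds):

    # Assert in the compiler thread once compilation id 1 is reached;
    # DumpReplayDataOnError (true by default) then writes the replay data
    # using the default replay_pid%p.log pattern (%p replaced with the pid).
    java -XX:CICrashAt=1 -XX:+DumpReplayDataOnError MyApp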
......@@ -454,6 +454,7 @@ class os: AllStatic {
// File i/o operations
static const int default_file_open_flags();
static int open(const char *path, int oflag, int mode);
static FILE* open(int fd, const char* mode);
static int close(int fd);
static jlong lseek(int fd, jlong offset, int whence);
static char* native_path(char *path);
......@@ -477,7 +478,7 @@ class os: AllStatic {
static const char* dll_file_extension();
static const char* get_temp_directory();
static const char* get_current_directory(char *buf, int buflen);
static const char* get_current_directory(char *buf, size_t buflen);
// Builds a platform-specific full library path given a ld path and lib name
// Returns true if buffer contains full path to existing file, false otherwise
......
......@@ -1316,12 +1316,6 @@ JRT_BLOCK_ENTRY(address, SharedRuntime::handle_wrong_method(JavaThread* thread))
assert(stub_frame.is_runtime_frame(), "sanity check");
frame caller_frame = stub_frame.sender(&reg_map);
// MethodHandle invokes don't have a CompiledIC and should always
// simply redispatch to the callee_target.
address sender_pc = caller_frame.pc();
CodeBlob* sender_cb = caller_frame.cb();
nmethod* sender_nm = sender_cb->as_nmethod_or_null();
if (caller_frame.is_interpreted_frame() ||
caller_frame.is_entry_frame()) {
Method* callee = thread->callee_target();
......
......@@ -154,9 +154,10 @@ void SimpleThresholdPolicy::set_carry_if_necessary(InvocationCounter *counter) {
// Set carry flags on the counters if necessary
void SimpleThresholdPolicy::handle_counter_overflow(Method* method) {
MethodCounters *mcs = method->method_counters();
assert(mcs != NULL, "");
set_carry_if_necessary(mcs->invocation_counter());
set_carry_if_necessary(mcs->backedge_counter());
if (mcs != NULL) {
set_carry_if_necessary(mcs->invocation_counter());
set_carry_if_necessary(mcs->backedge_counter());
}
MethodData* mdo = method->method_data();
if (mdo != NULL) {
set_carry_if_necessary(mdo->invocation_counter());
......
......@@ -136,13 +136,12 @@ volatile int NMethodSweeper::_sweep_started = 0; // Whether a sweep is in progre
jint NMethodSweeper::_locked_seen = 0;
jint NMethodSweeper::_not_entrant_seen_on_stack = 0;
bool NMethodSweeper::_rescan = false;
bool NMethodSweeper::_do_sweep = false;
bool NMethodSweeper::_was_full = false;
jint NMethodSweeper::_advise_to_sweep = 0;
jlong NMethodSweeper::_last_was_full = 0;
uint NMethodSweeper::_highest_marked = 0;
long NMethodSweeper::_was_full_traversal = 0;
bool NMethodSweeper::_resweep = false;
jint NMethodSweeper::_flush_token = 0;
jlong NMethodSweeper::_last_full_flush_time = 0;
int NMethodSweeper::_highest_marked = 0;
int NMethodSweeper::_dead_compile_ids = 0;
long NMethodSweeper::_last_flush_traversal_id = 0;
class MarkActivationClosure: public CodeBlobClosure {
public:
......@@ -155,20 +154,16 @@ public:
};
static MarkActivationClosure mark_activation_closure;
bool NMethodSweeper::sweep_in_progress() {
return (_current != NULL);
}
void NMethodSweeper::scan_stacks() {
assert(SafepointSynchronize::is_at_safepoint(), "must be executed at a safepoint");
if (!MethodFlushing) return;
_do_sweep = true;
// No need to synchronize access, since this is always executed at a
// safepoint. If we aren't in the middle of scan and a rescan
// hasn't been requested then just return. If UseCodeCacheFlushing is on and
// code cache flushing is in progress, don't skip sweeping to help make progress
// clearing space in the code cache.
if ((_current == NULL && !_rescan) && !(UseCodeCacheFlushing && !CompileBroker::should_compile_new_jobs())) {
_do_sweep = false;
return;
}
// safepoint.
// Make sure CompiledIC_lock is unlocked, since we might update some
// inline caches. If it is, we just bail-out and try later.
......@@ -176,7 +171,7 @@ void NMethodSweeper::scan_stacks() {
// Check for restart
assert(CodeCache::find_blob_unsafe(_current) == _current, "Sweeper nmethod cached state invalid");
if (_current == NULL) {
if (!sweep_in_progress() && _resweep) {
_seen = 0;
_invocations = NmethodSweepFraction;
_current = CodeCache::first_nmethod();
......@@ -187,39 +182,30 @@ void NMethodSweeper::scan_stacks() {
Threads::nmethods_do(&mark_activation_closure);
// reset the flags since we started a scan from the beginning.
_rescan = false;
_resweep = false;
_locked_seen = 0;
_not_entrant_seen_on_stack = 0;
}
if (UseCodeCacheFlushing) {
if (!CodeCache::needs_flushing()) {
// scan_stacks() runs during a safepoint, no race with setters
_advise_to_sweep = 0;
// only allow new flushes after the interval is complete.
jlong now = os::javaTimeMillis();
jlong max_interval = (jlong)MinCodeCacheFlushingInterval * (jlong)1000;
jlong curr_interval = now - _last_full_flush_time;
if (curr_interval > max_interval) {
_flush_token = 0;
}
if (was_full()) {
// There was some progress so attempt to restart the compiler
jlong now = os::javaTimeMillis();
jlong max_interval = (jlong)MinCodeCacheFlushingInterval * (jlong)1000;
jlong curr_interval = now - _last_was_full;
if ((!CodeCache::needs_flushing()) && (curr_interval > max_interval)) {
CompileBroker::set_should_compile_new_jobs(CompileBroker::run_compilation);
set_was_full(false);
// Update the _last_was_full time so we can tell how fast the
// code cache is filling up
_last_was_full = os::javaTimeMillis();
log_sweep("restart_compiler");
}
if (!CodeCache::needs_flushing() && !CompileBroker::should_compile_new_jobs()) {
CompileBroker::set_should_compile_new_jobs(CompileBroker::run_compilation);
log_sweep("restart_compiler");
}
}
}
void NMethodSweeper::possibly_sweep() {
assert(JavaThread::current()->thread_state() == _thread_in_vm, "must run in vm mode");
if ((!MethodFlushing) || (!_do_sweep)) return;
if (!MethodFlushing || !sweep_in_progress()) return;
if (_invocations > 0) {
// Only one thread at a time will sweep
......@@ -253,6 +239,14 @@ void NMethodSweeper::sweep_code_cache() {
tty->print_cr("### Sweep at %d out of %d. Invocations left: %d", _seen, CodeCache::nof_nmethods(), _invocations);
}
if (!CompileBroker::should_compile_new_jobs()) {
// If we have turned off compilations we might as well do full sweeps
// in order to reach the clean state faster. Otherwise the sleeping compiler
// threads will slow down sweeping. After a few iterations the cache
// will be clean and sweeping stops (_resweep will not be set)
_invocations = 1;
}
// We want to visit all nmethods after NmethodSweepFraction
// invocations so divide the remaining number of nmethods by the
// remaining number of invocations. This is only an estimate since
......@@ -296,7 +290,7 @@ void NMethodSweeper::sweep_code_cache() {
assert(_invocations > 1 || _current == NULL, "must have scanned the whole cache");
if (_current == NULL && !_rescan && (_locked_seen || _not_entrant_seen_on_stack)) {
if (!sweep_in_progress() && !_resweep && (_locked_seen || _not_entrant_seen_on_stack)) {
// we've completed a scan without making progress but there were
// nmethods we were unable to process either because they were
locked or were still on stack. We don't have to aggressively
......@@ -318,6 +312,13 @@ void NMethodSweeper::sweep_code_cache() {
if (_invocations == 1) {
log_sweep("finished");
}
// Sweeper is the only case where memory is released,
// check here if it is time to restart the compiler.
if (UseCodeCacheFlushing && !CompileBroker::should_compile_new_jobs() && !CodeCache::needs_flushing()) {
CompileBroker::set_should_compile_new_jobs(CompileBroker::run_compilation);
log_sweep("restart_compiler");
}
}
class NMethodMarker: public StackObj {
......@@ -392,7 +393,7 @@ void NMethodSweeper::process_nmethod(nmethod *nm) {
tty->print_cr("### Nmethod %3d/" PTR_FORMAT " (zombie) being marked for reclamation", nm->compile_id(), nm);
}
nm->mark_for_reclamation();
_rescan = true;
_resweep = true;
SWEEP(nm);
}
} else if (nm->is_not_entrant()) {
......@@ -403,7 +404,7 @@ void NMethodSweeper::process_nmethod(nmethod *nm) {
tty->print_cr("### Nmethod %3d/" PTR_FORMAT " (not entrant) being made zombie", nm->compile_id(), nm);
}
nm->make_zombie();
_rescan = true;
_resweep = true;
SWEEP(nm);
} else {
// Still alive, clean up its inline caches
......@@ -425,16 +426,15 @@ void NMethodSweeper::process_nmethod(nmethod *nm) {
release_nmethod(nm);
} else {
nm->make_zombie();
_rescan = true;
_resweep = true;
SWEEP(nm);
}
} else {
assert(nm->is_alive(), "should be alive");
if (UseCodeCacheFlushing) {
if ((nm->method()->code() != nm) && !(nm->is_locked_by_vm()) && !(nm->is_osr_method()) &&
(_traversals > _was_full_traversal+2) && (((uint)nm->compile_id()) < _highest_marked) &&
CodeCache::needs_flushing()) {
if (nm->is_speculatively_disconnected() && !nm->is_locked_by_vm() && !nm->is_osr_method() &&
(_traversals > _last_flush_traversal_id + 2) && (nm->compile_id() < _highest_marked)) {
// This method has not been called since the forced cleanup happened
nm->make_not_entrant();
}
......@@ -457,41 +457,27 @@ void NMethodSweeper::process_nmethod(nmethod *nm) {
// _code field is restored and the Method*/nmethod
// go back to their normal state.
void NMethodSweeper::handle_full_code_cache(bool is_full) {
// Only the first one to notice can advise us to start early cleaning
if (!is_full){
jint old = Atomic::cmpxchg( 1, &_advise_to_sweep, 0 );
if (old != 0) {
return;
}
}
if (is_full) {
// Since code cache is full, immediately stop new compiles
bool did_set = CompileBroker::set_should_compile_new_jobs(CompileBroker::stop_compilation);
if (!did_set) {
// only the first to notice can start the cleaning,
// others will go back and block
return;
}
set_was_full(true);
// If we run out within MinCodeCacheFlushingInterval of the last unload time, give up
jlong now = os::javaTimeMillis();
jlong max_interval = (jlong)MinCodeCacheFlushingInterval * (jlong)1000;
jlong curr_interval = now - _last_was_full;
if (curr_interval < max_interval) {
_rescan = true;
log_sweep("disable_compiler", "flushing_interval='" UINT64_FORMAT "'",
curr_interval/1000);
return;
if (CompileBroker::set_should_compile_new_jobs(CompileBroker::stop_compilation)) {
log_sweep("disable_compiler");
}
}
// Make sure only one thread can flush
// The token is reset after CodeCacheMinimumFlushInterval in scan stacks,
// no need to check the timeout here.
jint old = Atomic::cmpxchg( 1, &_flush_token, 0 );
if (old != 0) {
return;
}
VM_HandleFullCodeCache op(is_full);
VMThread::execute(&op);
// rescan again as soon as possible
_rescan = true;
// resweep again as soon as possible
_resweep = true;
}
void NMethodSweeper::speculative_disconnect_nmethods(bool is_full) {
......@@ -500,62 +486,64 @@ void NMethodSweeper::speculative_disconnect_nmethods(bool is_full) {
debug_only(jlong start = os::javaTimeMillis();)
if ((!was_full()) && (is_full)) {
if (!CodeCache::needs_flushing()) {
log_sweep("restart_compiler");
CompileBroker::set_should_compile_new_jobs(CompileBroker::run_compilation);
return;
}
}
// Traverse the code cache trying to dump the oldest nmethods
uint curr_max_comp_id = CompileBroker::get_compilation_id();
uint flush_target = ((curr_max_comp_id - _highest_marked) >> 1) + _highest_marked;
int curr_max_comp_id = CompileBroker::get_compilation_id();
int flush_target = ((curr_max_comp_id - _dead_compile_ids) / CodeCacheFlushingFraction) + _dead_compile_ids;
log_sweep("start_cleaning");
nmethod* nm = CodeCache::alive_nmethod(CodeCache::first());
jint disconnected = 0;
jint made_not_entrant = 0;
jint nmethod_count = 0;
while ((nm != NULL)){
uint curr_comp_id = nm->compile_id();
int curr_comp_id = nm->compile_id();
// OSR methods cannot be flushed like this. Also, don't flush native methods
// since they are part of the JDK in most cases
if (nm->is_in_use() && (!nm->is_osr_method()) && (!nm->is_locked_by_vm()) &&
(!nm->is_native_method()) && ((curr_comp_id < flush_target))) {
if ((nm->method()->code() == nm)) {
// This method has not been previously considered for
// unloading or it was restored already
CodeCache::speculatively_disconnect(nm);
disconnected++;
} else if (nm->is_speculatively_disconnected()) {
// This method was previously considered for preemptive unloading and was not called since then
CompilationPolicy::policy()->delay_compilation(nm->method());
nm->make_not_entrant();
made_not_entrant++;
}
if (!nm->is_osr_method() && !nm->is_locked_by_vm() && !nm->is_native_method()) {
// only count methods that can be speculatively disconnected
nmethod_count++;
if (nm->is_in_use() && (curr_comp_id < flush_target)) {
if ((nm->method()->code() == nm)) {
// This method has not been previously considered for
// unloading or it was restored already
CodeCache::speculatively_disconnect(nm);
disconnected++;
} else if (nm->is_speculatively_disconnected()) {
// This method was previously considered for preemptive unloading and was not called since then
CompilationPolicy::policy()->delay_compilation(nm->method());
nm->make_not_entrant();
made_not_entrant++;
}
if (curr_comp_id > _highest_marked) {
_highest_marked = curr_comp_id;
if (curr_comp_id > _highest_marked) {
_highest_marked = curr_comp_id;
}
}
}
nm = CodeCache::alive_nmethod(CodeCache::next(nm));
}
// remember how many compile_ids weren't seen last flush.
_dead_compile_ids = curr_max_comp_id - nmethod_count;
log_sweep("stop_cleaning",
"disconnected='" UINT32_FORMAT "' made_not_entrant='" UINT32_FORMAT "'",
disconnected, made_not_entrant);
// Shut off compiler. Sweeper will start over with a new stack scan and
// traversal cycle and turn it back on if it clears enough space.
if (was_full()) {
_last_was_full = os::javaTimeMillis();
CompileBroker::set_should_compile_new_jobs(CompileBroker::stop_compilation);
if (is_full) {
_last_full_flush_time = os::javaTimeMillis();
}
// After two more traversals the sweeper will get rid of unrestored nmethods
_was_full_traversal = _traversals;
_last_flush_traversal_id = _traversals;
_resweep = true;
#ifdef ASSERT
jlong end = os::javaTimeMillis();
if(PrintMethodFlushing && Verbose) {
......
......@@ -35,26 +35,29 @@ class NMethodSweeper : public AllStatic {
static nmethod* _current; // Current nmethod
static int _seen; // Nof. nmethod we have currently processed in current pass of CodeCache
static volatile int _invocations; // No. of invocations left until we are completed with this pass
static volatile int _sweep_started; // Flag to control conc sweeper
static volatile int _invocations; // No. of invocations left until we are completed with this pass
static volatile int _sweep_started; // Flag to control conc sweeper
static bool _rescan; // Indicates that we should do a full rescan of the
// of the code cache looking for work to do.
static bool _do_sweep; // Flag to skip the conc sweep if no stack scan happened
static int _locked_seen; // Number of locked nmethods encountered during the scan
//The following are reset in scan_stacks and synchronized by the safepoint
static bool _resweep; // Indicates that a change has happened and we want another sweep,
// always checked and reset at a safepoint so memory will be in sync.
static int _locked_seen; // Number of locked nmethods encountered during the scan
static int _not_entrant_seen_on_stack; // Number of not entrant nmethods that are still on stack
static jint _flush_token; // token that guards method flushing, making sure it is executed only once.
static bool _was_full; // remember if we did emergency unloading
static jint _advise_to_sweep; // flag to indicate code cache getting full
static jlong _last_was_full; // timestamp of last emergency unloading
static uint _highest_marked; // highest compile id dumped at last emergency unloading
static long _was_full_traversal; // trav number at last emergency unloading
// These are set during a flush, a VM-operation
static long _last_flush_traversal_id; // trav number at last flush unloading
static jlong _last_full_flush_time; // timestamp of last emergency unloading
static void process_nmethod(nmethod *nm);
// These are synchronized by the _sweep_started token
static int _highest_marked; // highest compile id dumped at last emergency unloading
static int _dead_compile_ids; // number of compile ids that were not in the cache last flush
static void process_nmethod(nmethod *nm);
static void release_nmethod(nmethod* nm);
static void log_sweep(const char* msg, const char* format = NULL, ...);
static bool sweep_in_progress();
public:
static long traversal_count() { return _traversals; }
......@@ -71,17 +74,14 @@ class NMethodSweeper : public AllStatic {
static void possibly_sweep(); // Compiler threads call this to sweep
static void notify(nmethod* nm) {
// Perform a full scan of the code cache from the beginning. No
// Request a new sweep of the code cache from the beginning. No
// need to synchronize the setting of this flag since it only
// changes to false at safepoint so we can never overwrite it with false.
_rescan = true;
_resweep = true;
}
static void handle_full_code_cache(bool is_full); // Called by compilers who fail to allocate
static void speculative_disconnect_nmethods(bool was_full); // Called by vm op to deal with alloc failure
static void set_was_full(bool state) { _was_full = state; }
static bool was_full() { return _was_full; }
};
#endif // SHARE_VM_RUNTIME_SWEEPER_HPP
......@@ -828,6 +828,7 @@ typedef BinaryTreeDictionary<Metablock, FreeList> MetablockTreeDictionary;
nonstatic_field(nmethod, _lock_count, jint) \
nonstatic_field(nmethod, _stack_traversal_mark, long) \
nonstatic_field(nmethod, _compile_id, int) \
nonstatic_field(nmethod, _comp_level, int) \
nonstatic_field(nmethod, _exception_cache, ExceptionCache*) \
nonstatic_field(nmethod, _marked_for_deoptimization, bool) \
\
......
......@@ -196,7 +196,7 @@ class fileStream : public outputStream {
fileStream() { _file = NULL; _need_close = false; }
fileStream(const char* file_name);
fileStream(const char* file_name, const char* opentype);
fileStream(FILE* file) { _file = file; _need_close = false; }
fileStream(FILE* file, bool need_close = false) { _file = file; _need_close = need_close; }
~fileStream();
bool is_open() const { return _file != NULL; }
void set_need_close(bool b) { _need_close = b;}
......
......@@ -799,6 +799,56 @@ void VMError::report(outputStream* st) {
VMError* volatile VMError::first_error = NULL;
volatile jlong VMError::first_error_tid = -1;
/** Expand a pattern into a buffer starting at pos and open a file using constructed path */
static int expand_and_open(const char* pattern, char* buf, size_t buflen, size_t pos) {
int fd = -1;
if (Arguments::copy_expand_pid(pattern, strlen(pattern), &buf[pos], buflen - pos)) {
fd = open(buf, O_RDWR | O_CREAT | O_TRUNC, 0666);
}
return fd;
}
/**
* Construct a file name for a log file and return its file descriptor.
* Name and location depend on the pattern, the default_pattern parameter and access
* permissions.
*/
static int prepare_log_file(const char* pattern, const char* default_pattern, char* buf, size_t buflen) {
int fd = -1;
// If possible, use specified pattern to construct log file name
if (pattern != NULL) {
fd = expand_and_open(pattern, buf, buflen, 0);
}
// Either user didn't specify, or the user's location failed,
// so use the default name in the current directory
if (fd == -1) {
const char* cwd = os::get_current_directory(buf, buflen);
if (cwd != NULL) {
size_t pos = strlen(cwd);
int fsep_len = jio_snprintf(&buf[pos], buflen-pos, "%s", os::file_separator());
pos += fsep_len;
if (fsep_len > 0) {
fd = expand_and_open(default_pattern, buf, buflen, pos);
}
}
}
// try temp directory if it exists.
if (fd == -1) {
const char* tmpdir = os::get_temp_directory();
if (tmpdir != NULL && strlen(tmpdir) > 0) {
int pos = jio_snprintf(buf, buflen, "%s%s", tmpdir, os::file_separator());
if (pos > 0) {
fd = expand_and_open(default_pattern, buf, buflen, pos);
}
}
}
return fd;
}
void VMError::report_and_die() {
// Don't allocate large buffer on stack
static char buffer[O_BUFLEN];
......@@ -908,36 +958,7 @@ void VMError::report_and_die() {
// see if log file is already open
if (!log.is_open()) {
// open log file
int fd = -1;
if (ErrorFile != NULL) {
bool copy_ok =
Arguments::copy_expand_pid(ErrorFile, strlen(ErrorFile), buffer, sizeof(buffer));
if (copy_ok) {
fd = open(buffer, O_RDWR | O_CREAT | O_TRUNC, 0666);
}
}
if (fd == -1) {
const char *cwd = os::get_current_directory(buffer, sizeof(buffer));
size_t len = strlen(cwd);
// either user didn't specify, or the user's location failed,
// so use the default name in the current directory
jio_snprintf(&buffer[len], sizeof(buffer)-len, "%shs_err_pid%u.log",
os::file_separator(), os::current_process_id());
fd = open(buffer, O_RDWR | O_CREAT | O_TRUNC, 0666);
}
if (fd == -1) {
const char * tmpdir = os::get_temp_directory();
// try temp directory if it exists.
if (tmpdir != NULL && tmpdir[0] != '\0') {
jio_snprintf(buffer, sizeof(buffer), "%s%shs_err_pid%u.log",
tmpdir, os::file_separator(), os::current_process_id());
fd = open(buffer, O_RDWR | O_CREAT | O_TRUNC, 0666);
}
}
int fd = prepare_log_file(ErrorFile, "hs_err_pid%p.log", buffer, sizeof(buffer));
if (fd != -1) {
out.print_raw("# An error report file with more information is saved as:\n# ");
out.print_raw_cr(buffer);
......@@ -961,7 +982,7 @@ void VMError::report_and_die() {
// Run error reporting to determine whether or not to report the crash.
if (!transmit_report_done && should_report_bug(first_error->_id)) {
transmit_report_done = true;
FILE* hs_err = ::fdopen(log.fd(), "r");
FILE* hs_err = os::open(log.fd(), "r");
if (NULL != hs_err) {
ErrorReporter er;
er.call(hs_err, buffer, O_BUFLEN);
......@@ -1011,7 +1032,19 @@ void VMError::report_and_die() {
skip_replay = true;
ciEnv* env = ciEnv::current();
if (env != NULL) {
env->dump_replay_data();
int fd = prepare_log_file(ReplayDataFile, "replay_pid%p.log", buffer, sizeof(buffer));
if (fd != -1) {
FILE* replay_data_file = os::open(fd, "w");
if (replay_data_file != NULL) {
fileStream replay_data_stream(replay_data_file, /*need_close=*/true);
env->dump_replay_data(&replay_data_stream);
out.print_raw("#\n# Compiler replay data is saved as:\n# ");
out.print_raw_cr(buffer);
} else {
out.print_raw("#\n# Can't open file to dump replay data. Error: ");
out.print_raw_cr(strerror(os::get_last_error()));
}
}
}
}
......
#!/bin/sh
#
# Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 only, as
# published by the Free Software Foundation.
#
# This code is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# version 2 for more details (a copy is included in the LICENSE file that
# accompanied this code).
#
# You should have received a copy of the GNU General Public License version
# 2 along with this work; if not, write to the Free Software Foundation,
# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
# or visit www.oracle.com if you need additional information or have any
# questions.
#
#
##
## @test
## @bug 8011675
## @summary testing of ciReplay with replay.txt generated by SA
## @author igor.ignatyev@oracle.com
## @run shell TestSA.sh
##
if [ "${TESTSRC}" = "" ]
then
TESTSRC=${PWD}
echo "TESTSRC not set. Using "${TESTSRC}" as default"
fi
echo "TESTSRC=${TESTSRC}"
## Adding common setup Variables for running shell tests.
. ${TESTSRC}/../../test_env.sh
. ${TESTSRC}/common.sh
generate_replay
${MV} ${replay_data} replay_vm.txt
if [ -z "${core_file}" -o ! -r "${core_file}" ]
then
# skip test if MacOS host isn't configured for core dumping
if [ "$OS" = "Darwin" ]
then
if [ ! -d "/cores" ]
then
echo TEST SKIPPED: \'/cores\' dir doesn\'t exist
exit 0
fi
if [ ! -w "/cores" ]
then
echo TEST SKIPPED: \'/cores\' dir exists but is not writable
exit 0
fi
fi
test_fail 2 "CHECK :: CORE GENERATION" "core wasn't generated on $OS"
fi
echo "dumpreplaydata -a > ${replay_data}" | \
${JAVA} ${TESTVMOPTS} \
-cp ${TESTJAVA}${FS}lib${FS}sa-jdi.jar \
sun.jvm.hotspot.CLHSDB ${JAVA} ${core_file}
if [ ! -s ${replay_data} ]
then
test_fail 1 "CHECK :: REPLAY DATA GENERATION" \
"replay data wasn't generated by SA"
fi
diff --brief ${replay_data} replay_vm.txt
if [ $? -ne 0 ]
then
echo WARNING: replay.txt from SA != replay.txt from VM
fi
common_tests 10
${VM_TYPE}_tests 20
cleanup
echo TEST PASSED
#!/bin/sh
#
# Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 only, as
# published by the Free Software Foundation.
#
# This code is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# version 2 for more details (a copy is included in the LICENSE file that
# accompanied this code).
#
# You should have received a copy of the GNU General Public License version
# 2 along with this work; if not, write to the Free Software Foundation,
# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
# or visit www.oracle.com if you need additional information or have any
# questions.
#
#
##
## @test
## @bug 8011675
## @summary testing of ciReplay with replay.txt generated by VM
## @author igor.ignatyev@oracle.com
## @run shell TestVM.sh
##
if [ "${TESTSRC}" = "" ]
then
TESTSRC=${PWD}
echo "TESTSRC not set. Using "${TESTSRC}" as default"
fi
echo "TESTSRC=${TESTSRC}"
## Adding common setup Variables for running shell tests.
. ${TESTSRC}/../../test_env.sh
. ${TESTSRC}/common.sh
generate_replay
if [ ! -s ${replay_data} ]
then
test_fail 1 "CHECK :: REPLAY DATA GENERATION" \
"replay data wasn't generated by VM"
fi
common_tests 10
${VM_TYPE}_tests 20
cleanup
if [ $is_tiered -eq 1 ]
then
stop_level=1
while [ $stop_level -le $server_level ]
do
generate_replay "-XX:TieredStopAtLevel=$stop_level"
if [ ! -s ${replay_data} ]
then
test_fail `expr $stop_level + 30` \
"TIERED LEVEL $stop_level :: REPLAY DATA GENERATION" \
"replay data wasn't generated by VM with stop_level=$stop_level"
fi
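# field 6 of a "compile" line in the replay file is the comp_level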
level=`grep "^compile " $replay_data | awk '{print $6}'`
if [ $level -gt $stop_level ]
then
test_fail `expr $stop_level + 40` \
"TIERED LEVEL $stop_level :: COMP_LEVEL VERIFICATION" \
"comp_level in replay[$level] is greater than stop_level[$stop_level]"
fi
positive_test `expr $stop_level + 50` "TIERED LEVEL $stop_level :: REPLAY" \
"-XX:TieredStopAtLevel=$stop_level"
stop_level=`expr $stop_level + 1`
done
cleanup
fi
echo TEST PASSED
#!/bin/sh
#
# Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 only, as
# published by the Free Software Foundation.
#
# This code is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# version 2 for more details (a copy is included in the LICENSE file that
# accompanied this code).
#
# You should have received a copy of the GNU General Public License version
# 2 along with this work; if not, write to the Free Software Foundation,
# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
# or visit www.oracle.com if you need additional information or have any
# questions.
#
#
##
## @test
## @bug 8011675
## @summary testing of ciReplay with a replay.txt generated by the VM, w/o comp_level
## @author igor.ignatyev@oracle.com
## @run shell TestVM_no_comp_level.sh
##
if [ "${TESTSRC}" = "" ]
then
TESTSRC=${PWD}
echo "TESTSRC not set. Using "${TESTSRC}" as default"
fi
echo "TESTSRC=${TESTSRC}"
## Adding common setup Variables for running shell tests.
. ${TESTSRC}/../../test_env.sh
. ${TESTSRC}/common.sh
generate_replay
if [ ! -s ${replay_data} ]
then
test_fail 1 "CHECK :: REPLAY DATA GENERATION" \
"replay data wasn't generated by VM"
fi
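# copy the replay data and strip the trailing comp_level (everything after the
# first five fields) from each 'compile' line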
${CP} ${replay_data} replay_vm.txt
sed 's/^\(compile *[^ ][^ ]* *[^ ][^ ]* [^ ][^ ]* [^ ][^ ]*\).*$/\1/' \
replay_vm.txt > ${replay_data}
if [ $client_available -eq 1 ]
then
# tiered is unavailable in client vm, so results w/ flags will be the same as w/o flags
negative_test 10 "CLIENT" -client
fi
if [ $server_available -eq 1 ]
then
positive_test 21 "SERVER :: NON-TIERED" -XX:-TieredCompilation -server
positive_test 22 "SERVER :: TIERED" -XX:+TieredCompilation -server
fi
cleanup
echo TEST PASSED
#!/bin/sh
#
# Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 only, as
# published by the Free Software Foundation.
#
# This code is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# version 2 for more details (a copy is included in the LICENSE file that
# accompanied this code).
#
# You should have received a copy of the GNU General Public License version
# 2 along with this work; if not, write to the Free Software Foundation,
# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
# or visit www.oracle.com if you need additional information or have any
# questions.
#
#
# $1 - error code
# $2 - test name
# $3,.. - description
test_fail() {
error=$1
shift
name=$1
shift
echo "TEST [$name] FAILED:"
echo "$@"
exit $error
}
# $@ - additional vm opts
start_test() {
# disable core dump on *nix
ulimit -S -c 0
# disable core dump on windows
VMOPTS="$@ -XX:-CreateMinidumpOnCrash"
cmd="${JAVA} ${VMOPTS} -XX:+ReplayCompiles -XX:ReplayDataFile=${replay_data}"
echo $cmd
$cmd
return $?
}
# $1 - error_code
# $2 - test name
# $3,.. - additional vm opts
positive_test() {
error=$1
shift
name=$1
shift
VMOPTS="${TESTVMOPTS} $@"
echo "POSITIVE TEST [$name]"
start_test ${VMOPTS}
exit_code=$?
if [ ${exit_code} -ne 0 ]
then
test_fail $error "$name" "exit_code[${exit_code}] != 0 during replay "\
"w/ vmopts: ${VMOPTS}"
fi
}
# $1 - error_code
# $2 - test name
# $3,.. - additional vm opts
negative_test() {
error=$1
shift
name=$1
shift
VMOPTS="${TESTVMOPTS} $@"
echo "NEGATIVE TEST [$name]"
start_test ${VMOPTS}
exit_code=$?
if [ ${exit_code} -eq 0 ]
then
test_fail $error "$name" "exit_code[${exit_code}] == 0 during replay "\
"w/ vmopts: ${VMOPTS}"
fi
}
# $1 - initial error_code
common_tests() {
positive_test $1 "COMMON :: THE SAME FLAGS"
positive_test `expr $1 + 1` "COMMON :: TIERED" -XX:+TieredCompilation
}
# $1 - initial error_code
# $2 - non-tiered comp_level
nontiered_tests() {
level=`grep "^compile " $replay_data | awk '{print $6}'`
# is this level available in a non-tiered VM?
if [ "$level" -eq $2 ]
then
positive_test $1 "NON-TIERED :: AVAILABLE COMP_LEVEL" \
-XX:-TieredCompilation
else
negative_test `expr $1 + 1` "NON-TIERED :: UNAVAILABLE COMP_LEVEL" \
-XX:-TieredCompilation
fi
}
# $1 - initial error_code
client_tests() {
# testing in opposite VM
if [ $server_available -eq 1 ]
then
negative_test $1 "SERVER :: NON-TIERED" -XX:-TieredCompilation \
-server
positive_test `expr $1 + 1` "SERVER :: TIERED" -XX:+TieredCompilation \
-server
fi
nontiered_tests `expr $1 + 2` $client_level
}
# $1 - initial error_code
server_tests() {
# testing in opposite VM
if [ $client_available -eq 1 ]
then
# tiered is unavailable in client vm, so results w/ flags will be the same as w/o flags
negative_test $1 "CLIENT" -client
fi
nontiered_tests `expr $1 + 2` $server_level
}
cleanup() {
${RM} -f core*
${RM} -f replay*.txt
${RM} -f hs_err_pid*.log
${RM} -f test_core
${RM} -f test_replay.txt
}
JAVA=${TESTJAVA}${FS}bin${FS}java
replay_data=test_replay.txt
${JAVA} ${TESTVMOPTS} -Xinternalversion 2>&1 | grep debug
# Only test debug builds (replay only exists in debug VMs)
if [ $? -ne 0 ]
then
echo TEST SKIPPED: product build
exit 0
fi
is_int=`${JAVA} ${TESTVMOPTS} -version 2>&1 | grep -c "interpreted mode"`
# Not applicable for Xint
if [ $is_int -ne 0 ]
then
echo TEST SKIPPED: interpreted mode
exit 0
fi
cleanup
client_available=`${JAVA} ${TESTVMOPTS} -client -Xinternalversion 2>&1 | \
grep -c Client`
server_available=`${JAVA} ${TESTVMOPTS} -server -Xinternalversion 2>&1 | \
grep -c Server`
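# check whether TieredCompilation is on by default in this VM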
is_tiered=`${JAVA} ${TESTVMOPTS} -XX:+PrintFlagsFinal -version | \
grep TieredCompilation | \
grep -c true`
# CompLevel_simple -- C1
client_level=1
# CompLevel_full_optimization -- C2 or Shark
server_level=4
echo "client_available=$client_available"
echo "server_available=$server_available"
echo "is_tiered=$is_tiered"
# crash the VM in a compiler thread, generating replay data and a 'small' dump file
# $@ - additional vm opts
generate_replay() {
# enable core dump
ulimit -c unlimited
cmd="${JAVA} ${TESTVMOPTS} $@ \
-Xms8m \
-Xmx32m \
-XX:MetaspaceSize=4m \
-XX:MaxMetaspaceSize=16m \
-XX:InitialCodeCacheSize=512k \
-XX:ReservedCodeCacheSize=4m \
-XX:ThreadStackSize=512 \
-XX:VMThreadStackSize=512 \
-XX:CompilerThreadStackSize=512 \
-XX:ParallelGCThreads=1 \
-XX:CICompilerCount=1 \
-Xcomp \
-XX:CICrashAt=1 \
-XX:+CreateMinidumpOnCrash \
-XX:+DumpReplayDataOnError \
-XX:ReplayDataFile=${replay_data} \
-version"
echo GENERATION OF REPLAY.TXT:
echo $cmd
${cmd} 2>&1 > crash.out
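# extract the core file location from the crash output (the line containing 'location:')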
core_locations=`grep -i core crash.out | grep "location:" | \
sed -e 's/.*location: //'`
rm crash.out
# process core file locations on *nix
if [ $OS != "windows" ]
then
# remove 'or' between '/core.<pid>' and 'core'
core_locations=`echo $core_locations | \
sed -e 's/\([^ ]*\) or \([^ ]*\)/\1 \2/'`
# add <core_path>/core.<pid> and core.<pid> as additional candidates
core=`echo $core_locations | awk '{print $1}'`
dir=`dirname $core`
core=`basename $core`
if [ -n "${core}" ]
then
core_locations="$core_locations $dir${FS}$core"
fi
core=`echo $core_locations | awk '{print $2}'`
if [ -n "${core}" ]
then
core_locations="$core_locations $dir${FS}$core"
fi
fi
echo "LOOKING FOR CORE IN ${core_locations}"
for core in $core_locations
do
if [ -r "$core" ]
then
core_file=$core
fi
done
# core-file was found
if [ -n "$core_file" ]
then
${MV} "${core_file}" test_core
core_file=test_core
fi
${RM} -f hs_err_pid*.log
}
......@@ -42,6 +42,11 @@ public abstract class CompilerWhiteBoxTest {
protected static int COMP_LEVEL_NONE = 0;
/** {@code CompLevel::CompLevel_any}, {@code CompLevel::CompLevel_all} */
protected static int COMP_LEVEL_ANY = -1;
/** {@code CompLevel::CompLevel_simple} -- C1 */
protected static int COMP_LEVEL_SIMPLE = 1;
/** {@code CompLevel::CompLevel_full_optimization} -- C2 or Shark */
protected static int COMP_LEVEL_FULL_OPTIMIZATION = 4;
/** Instance of WhiteBox */
protected static final WhiteBox WHITE_BOX = WhiteBox.getWhiteBox();
/** Value of {@code -XX:CompileThreshold} */
......@@ -91,6 +96,17 @@ public abstract class CompilerWhiteBoxTest {
return result == null ? defaultValue : result;
}
/** copy of is_c1_compile(int) from utilities/globalDefinitions.hpp */
protected static boolean isC1Compile(int compLevel) {
return (compLevel > COMP_LEVEL_NONE)
&& (compLevel < COMP_LEVEL_FULL_OPTIMIZATION);
}
/** copy of is_c2_compile(int) from utilities/globalDefinitions.hpp */
protected static boolean isC2Compile(int compLevel) {
return compLevel == COMP_LEVEL_FULL_OPTIMIZATION;
}
/** tested method */
protected final Executable method;
private final Callable<Integer> callable;
......
......@@ -23,6 +23,7 @@
/*
* @test MakeMethodNotCompilableTest
* @bug 8012322
* @library /testlibrary /testlibrary/whitebox
* @build MakeMethodNotCompilableTest
* @run main ClassFileInstaller sun.hotspot.WhiteBox
......@@ -67,28 +68,69 @@ public class MakeMethodNotCompilableTest extends CompilerWhiteBoxTest {
}
if (TIERED_COMPILATION) {
for (int i = 1, n = TIERED_STOP_AT_LEVEL + 1; i < n; ++i) {
WHITE_BOX.makeMethodNotCompilable(method, i);
if (WHITE_BOX.isMethodCompilable(method, i)) {
final int tierLimit = TIERED_STOP_AT_LEVEL + 1;
for (int testedTier = 1; testedTier < tierLimit; ++testedTier) {
testTier(testedTier);
}
for (int testedTier = 1; testedTier < tierLimit; ++testedTier) {
WHITE_BOX.makeMethodNotCompilable(method, testedTier);
if (WHITE_BOX.isMethodCompilable(method, testedTier)) {
throw new RuntimeException(method
+ " must be not compilable at level" + i);
+ " must be not compilable at level" + testedTier);
}
WHITE_BOX.enqueueMethodForCompilation(method, i);
WHITE_BOX.enqueueMethodForCompilation(method, testedTier);
checkNotCompiled();
if (!WHITE_BOX.isMethodCompilable(method)) {
System.out.println(method
+ " is not compilable after level " + i);
+ " is not compilable after level " + testedTier);
}
}
// WB.clearMethodState() must reset no-compilable flags
WHITE_BOX.clearMethodState(method);
if (!WHITE_BOX.isMethodCompilable(method)) {
} else {
compile();
checkCompiled();
int compLevel = WHITE_BOX.getMethodCompilationLevel(method);
WHITE_BOX.deoptimizeMethod(method);
WHITE_BOX.makeMethodNotCompilable(method, compLevel);
if (WHITE_BOX.isMethodCompilable(method, COMP_LEVEL_ANY)) {
throw new RuntimeException(method
+ " is not compilable after clearMethodState()");
+ " must be not compilable at CompLevel::CompLevel_any,"
+ " after it is not compilable at " + compLevel);
}
WHITE_BOX.clearMethodState(method);
// being not compilable at the opposite level must not affect this level
int oppositeLevel;
if (isC1Compile(compLevel)) {
oppositeLevel = COMP_LEVEL_FULL_OPTIMIZATION;
} else {
oppositeLevel = COMP_LEVEL_SIMPLE;
}
WHITE_BOX.makeMethodNotCompilable(method, oppositeLevel);
if (!WHITE_BOX.isMethodCompilable(method, COMP_LEVEL_ANY)) {
throw new RuntimeException(method
+ " must be compilable at CompLevel::CompLevel_any,"
+ " even it is not compilable at opposite level ["
+ compLevel + "]");
}
if (!WHITE_BOX.isMethodCompilable(method, compLevel)) {
throw new RuntimeException(method
+ " must be compilable at level " + compLevel
+ ", even it is not compilable at opposite level ["
+ compLevel + "]");
}
}
// clearing after tiered/non-tiered tests
// WB.clearMethodState() must reset no-compilable flags
WHITE_BOX.clearMethodState(method);
if (!WHITE_BOX.isMethodCompilable(method)) {
throw new RuntimeException(method
+ " is not compilable after clearMethodState()");
}
WHITE_BOX.makeMethodNotCompilable(method);
if (WHITE_BOX.isMethodCompilable(method)) {
throw new RuntimeException(method + " must be not compilable");
......@@ -108,4 +150,65 @@ public class MakeMethodNotCompilableTest extends CompilerWhiteBoxTest {
compile();
checkCompiled();
}
// separately tests each tier
private void testTier(int testedTier) {
if (!WHITE_BOX.isMethodCompilable(method, testedTier)) {
throw new RuntimeException(method
+ " is not compilable on start");
}
WHITE_BOX.makeMethodNotCompilable(method, testedTier);
// tests for all other tiers
for (int anotherTier = 1, tierLimit = TIERED_STOP_AT_LEVEL + 1;
anotherTier < tierLimit; ++anotherTier) {
boolean isCompilable = WHITE_BOX.isMethodCompilable(method,
anotherTier);
if (sameCompile(testedTier, anotherTier)) {
if (isCompilable) {
throw new RuntimeException(method
+ " must be not compilable at level " + anotherTier
+ ", if it is not compilable at " + testedTier);
}
WHITE_BOX.enqueueMethodForCompilation(method, anotherTier);
checkNotCompiled();
} else {
if (!isCompilable) {
throw new RuntimeException(method
+ " must be compilable at level " + anotherTier
+ ", even if it is not compilable at "
+ testedTier);
}
WHITE_BOX.enqueueMethodForCompilation(method, anotherTier);
checkCompiled();
WHITE_BOX.deoptimizeMethod(method);
}
if (!WHITE_BOX.isMethodCompilable(method, COMP_LEVEL_ANY)) {
throw new RuntimeException(method
+ " must be compilable at 'CompLevel::CompLevel_any'"
+ ", if it is not compilable only at " + testedTier);
}
}
// clear state after test
WHITE_BOX.clearMethodState(method);
if (!WHITE_BOX.isMethodCompilable(method, testedTier)) {
throw new RuntimeException(method
+ " is not compilable after clearMethodState()");
}
}
private boolean sameCompile(int level1, int level2) {
if (level1 == level2) {
return true;
}
if (isC1Compile(level1) && isC1Compile(level2)) {
return true;
}
if (isC2Compile(level1) && isC2Compile(level2)) {
return true;
}
return false;
}
}
/*
* Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
/*
* @test WhiteBox
* @bug 8011675
* @summary verify that WhiteBox can be used even if not all native methods are declared on the Java side
* @author igor.ignatyev@oracle.com
* @library /testlibrary
* @compile WhiteBox.java
* @run main ClassFileInstaller sun.hotspot.WhiteBox
* @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions -XX:+WhiteBoxAPI sun.hotspot.WhiteBox
* @clean sun.hotspot.WhiteBox
*/
package sun.hotspot;
public class WhiteBox {
private static native void registerNatives();
static { registerNatives(); }
public native int notExistedMethod();
public native int getHeapOopSize();
public static void main(String[] args) {
WhiteBox wb = new WhiteBox();
if (wb.getHeapOopSize() < 0) {
throw new Error("wb.getHeapOopSize() < 0");
}
boolean caught = false;
try {
wb.notExistedMethod();
} catch (UnsatisfiedLinkError e) {
caught = true;
}
if (!caught) {
throw new Error("wb.notExistedMethod() was invoked");
}
}
}