Unverified commit feed66df authored by Bruce Forstall, committed by GitHub

Update SuperPMI CI automation (#60375)

* Update SuperPMI CI automation

A number of changes:
1. Move the SuperPMI replay pipeline to the public instance, so it can be
triggered by GitHub PRs.
2. Rename SuperPMI collection scripts to contain "collect" in their names,
to distinguish them from the "replay" scripts.
3. Remove a lot of unused copy/paste cruft.
4. Create a new azdo_pipelines_util.py script for all the CI scripts to depend
on, so they don't import the superpmi.py script or each other.
5. Some changes to simplify the Python imports and make imported API usage
more consistent.
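
For illustration, the dependency cleanup in items 4 and 5 replaces cross-script
imports with a single shared module. A minimal before/after sketch of the import
pattern, taken from the script changes in this diff (exact symbol lists vary per
script):

    # Before: CI scripts imported helpers from superpmi_setup.py and superpmi.py
    from superpmi_setup import run_command, copy_directory, set_pipeline_variable
    from superpmi import ChangeDir, TempDir

    # After: every CI script depends only on the shared utility module
    from azdo_pipelines_util import run_command, copy_directory, set_pipeline_variable, ChangeDir, TempDir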

* Fix Python

* Fix Python names

* For testing, upload the SPMI collection to the "test_collect" location

Don't overwrite the existing collection.
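
Concretely, the temporary "test_collect" location is selected by passing an
explicit -jit_ee_version to the upload command, as in the pipeline step changed
below; the arch, build type, and paths here are illustrative placeholders:

    python superpmi.py upload -jit_ee_version test_collect -log_level DEBUG -arch x64 -build_type checked -mch_files <merged.mch> -core_root <core_root>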
Parent 56d807d4
@@ -58,7 +58,7 @@ jobs:
- template: /eng/pipelines/common/platform-matrix.yml
parameters:
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-job.yml
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-collect-job.yml
buildConfig: checked
platforms:
# Linux tests are built on the OSX machines.
@@ -79,7 +79,7 @@ jobs:
- template: /eng/pipelines/common/platform-matrix.yml
parameters:
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-job.yml
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-collect-job.yml
buildConfig: checked
platforms:
# Linux tests are built on the OSX machines.
@@ -101,7 +101,7 @@ jobs:
- template: /eng/pipelines/common/platform-matrix.yml
parameters:
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-job.yml
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-collect-job.yml
buildConfig: checked
platforms:
# Linux tests are built on the OSX machines.
@@ -123,7 +123,7 @@ jobs:
- template: /eng/pipelines/common/platform-matrix.yml
parameters:
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-job.yml
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-collect-job.yml
buildConfig: checked
platforms:
# Linux tests are built on the OSX machines.
@@ -144,7 +144,7 @@ jobs:
- template: /eng/pipelines/common/platform-matrix.yml
parameters:
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-job.yml
jobTemplate: /eng/pipelines/coreclr/templates/superpmi-collect-job.yml
buildConfig: checked
platforms:
# Linux tests are built on the OSX machines.
@@ -8,10 +8,6 @@ trigger:
- src/coreclr/jit/*
- src/coreclr/inc/jiteeversionguid.h
# This pipeline is supposed to be run only on merged changes
# and should not be triggerable from a PR.
pr: none
jobs:
- template: /eng/pipelines/common/platform-matrix.yml
@@ -3,7 +3,6 @@ parameters:
archType: ''
osGroup: ''
osSubgroup: ''
container: ''
runtimeVariant: ''
testGroup: ''
framework: net6.0 # Specify the appropriate framework when running release branches (ie netcoreapp3.0 for release/3.0)
@@ -13,8 +12,6 @@ parameters:
runtimeType: 'coreclr'
pool: ''
codeGenType: 'JIT'
projetFile: ''
runKind: ''
runJobTemplate: '/eng/pipelines/coreclr/templates/jit-run-exploratory-job.yml'
additionalSetupParameters: ''
@@ -27,8 +24,8 @@ jobs:
- template: ${{ parameters.runJobTemplate }}
parameters:
# Compute job name from template parameters
jobName: ${{ format('exploratory_{0}{1}_{2}_{3}_{4}_{5}_{6}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig, parameters.runtimeType, parameters.codeGenType, parameters.runKind) }}
displayName: ${{ format('Exploratory {0}{1} {2} {3} {4} {5} {6}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig, parameters.runtimeType, parameters.codeGenType, parameters.runKind) }}
jobName: ${{ format('exploratory_{0}{1}_{2}_{3}_{4}_{5}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig, parameters.runtimeType, parameters.codeGenType) }}
displayName: ${{ format('Exploratory {0}{1} {2} {3} {4} {5}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig, parameters.runtimeType, parameters.codeGenType) }}
pool: ${{ parameters.pool }}
buildConfig: ${{ parameters.buildConfig }}
archType: ${{ parameters.archType }}
@@ -38,7 +35,6 @@ jobs:
liveLibrariesBuildConfig: ${{ parameters.liveLibrariesBuildConfig }}
runtimeType: ${{ parameters.runtimeType }}
codeGenType: ${{ parameters.codeGenType }}
runKind: ${{ parameters.runKind }}
testGroup: ${{ parameters.testGroup }}
helixQueues: ${{ parameters.helixQueues }}
additionalSetupParameters: ${{ parameters.additionalSetupParameters }}
@@ -9,8 +9,6 @@ parameters:
archType: '' # required -- targeting CPU architecture
osGroup: '' # required -- operating system for the job
osSubgroup: '' # optional -- operating system subgroup
extraSetupParameters: '' # optional -- extra arguments to pass to the setup script
frameworks: ['net6.0'] # optional -- list of frameworks to run against
continueOnError: 'false' # optional -- determines whether to continue the build if the step errors
dependsOn: '' # optional -- dependencies of the job
timeoutInMinutes: 320 # optional -- timeout for the job
@@ -18,8 +16,7 @@ parameters:
liveLibrariesBuildConfig: '' # optional -- live-live libraries configuration to use for the run
runtimeType: 'coreclr' # optional -- Sets the runtime as coreclr or mono
codeGenType: 'JIT' # optional -- Decides on the codegen technology if running on mono
runKind: '' # required -- test category
helixQueues: '' # required -- Helix queue
helixQueues: '' # required -- Helix queues
dependOnEvaluatePaths: false
jobs:
@@ -87,16 +84,12 @@ jobs:
value: '$(Build.SourcesDirectory)/artifacts/issues_summary/'
- name: AntigenLogsLocation
value: '$(Build.SourcesDirectory)/artifacts/antigen_logs/'
workspace:
clean: all
pool:
${{ parameters.pool }}
container: ${{ parameters.container }}
strategy:
matrix:
${{ each framework in parameters.frameworks }}:
${{ framework }}:
_Framework: ${{ framework }}
steps:
- ${{ parameters.steps }}
@@ -9,16 +9,11 @@ parameters:
archType: '' # required -- targeting CPU architecture
osGroup: '' # required -- operating system for the job
osSubgroup: '' # optional -- operating system subgroup
extraSetupParameters: '' # optional -- extra arguments to pass to the setup script
frameworks: ['netcoreapp3.0'] # optional -- list of frameworks to run against
continueOnError: 'false' # optional -- determines whether to continue the build if the step errors
dependsOn: '' # optional -- dependencies of the job
timeoutInMinutes: 320 # optional -- timeout for the job
enableTelemetry: false # optional -- enable for telemetry
liveLibrariesBuildConfig: '' # optional -- live-live libraries configuration to use for the run
runtimeType: 'coreclr' # optional -- Sets the runtime as coreclr or mono
codeGenType: 'JIT' # optional -- Decides on the codegen technology if running on mono
runKind: '' # required -- test category
collectionType: ''
collectionName: ''
dependOnEvaluatePaths: false
@@ -108,15 +103,10 @@ jobs:
pool:
${{ parameters.pool }}
container: ${{ parameters.container }}
strategy:
matrix:
${{ each framework in parameters.frameworks }}:
${{ framework }}:
_Framework: ${{ framework }}
steps:
- ${{ parameters.steps }}
- script: $(PythonScript) $(Build.SourcesDirectory)/src/coreclr/scripts/superpmi_setup.py -source_directory $(Build.SourcesDirectory) -core_root_directory $(Core_Root_Dir) -arch $(archType) -mch_file_tag $(MchFileTag) -input_directory $(InputDirectory) -collection_name $(CollectionName) -collection_type $(CollectionType) -max_size 50 # size in MB
- script: $(PythonScript) $(Build.SourcesDirectory)/src/coreclr/scripts/superpmi_collect_setup.py -source_directory $(Build.SourcesDirectory) -core_root_directory $(Core_Root_Dir) -arch $(archType) -mch_file_tag $(MchFileTag) -input_directory $(InputDirectory) -collection_name $(CollectionName) -collection_type $(CollectionType) -max_size 50 # size in MB
displayName: ${{ format('SuperPMI setup ({0})', parameters.osGroup) }}
# Create required directories for merged mch collection and superpmi logs
@@ -135,7 +125,7 @@ jobs:
- template: /eng/pipelines/coreclr/templates/superpmi-send-to-helix.yml
parameters:
HelixSource: '$(HelixSourcePrefix)/$(Build.Repository.Name)/$(Build.SourceBranch)' # sources must start with pr/, official/, prodcon/, or agent/
HelixType: 'test/superpmi/$(CollectionName)/$(CollectionType)/$(_Framework)/$(Architecture)'
HelixType: 'test/superpmi/$(CollectionName)/$(CollectionType)/$(Architecture)'
HelixAccessToken: $(HelixApiAccessToken)
HelixTargetQueues: $(Queue)
HelixPreCommands: $(HelixPreCommand)
@@ -143,7 +133,7 @@ jobs:
WorkItemTimeout: 4:00 # 4 hours
WorkItemDirectory: '$(WorkItemDirectory)'
CorrelationPayloadDirectory: '$(CorrelationPayloadDirectory)'
ProjectFile: 'superpmi.proj'
ProjectFile: 'superpmi-collect.proj'
BuildConfig: ${{ parameters.buildConfig }}
osGroup: ${{ parameters.osGroup }}
InputArtifacts: '$(InputArtifacts)'
@@ -151,7 +141,7 @@ jobs:
CollectionName: '$(CollectionName)'
continueOnError: true # Run the future step i.e. merge-mch step even if this step fails.
# Always run merged step even if collection of some partition fails so we can store collection
# Always run merge step even if collection of some partition fails so we can store collection
# of the partitions that succeeded. If all the partitions fail, merge-mch would fail and we won't
# run future steps like uploading superpmi collection.
- script: $(PythonScript) $(Build.SourcesDirectory)/src/coreclr/scripts/superpmi.py merge-mch -log_level DEBUG -pattern $(MchFilesLocation)$(CollectionName).$(CollectionType)*.mch -output_mch_path $(MergedMchFileLocation)$(CollectionName).$(CollectionType).$(MchFileTag).mch
@@ -166,10 +156,10 @@ jobs:
archiveType: $(archiveType)
tarCompression: $(tarCompression)
archiveExtension: $(archiveExtension)
artifactName: 'SuperPMI_Collection_$(CollectionName)_$(CollectionType)_$(osGroup)$(osSubgroup)_$(archType)_$(buildConfig)_${{ parameters.runtimeType }}_${{ parameters.codeGenType }}_${{ parameters.runKind }}'
artifactName: 'SuperPMI_Collection_$(CollectionName)_$(CollectionType)_$(osGroup)$(osSubgroup)_$(archType)_$(buildConfig)'
displayName: ${{ format('Upload artifacts SuperPMI {0}-{1} collection', parameters.collectionName, parameters.collectionType) }}
- script: $(PythonScript) $(Build.SourcesDirectory)/src/coreclr/scripts/superpmi.py upload -log_level DEBUG -arch $(archType) -build_type $(buildConfig) -mch_files $(MergedMchFileLocation)$(CollectionName).$(CollectionType).$(MchFileTag).mch -core_root $(Build.SourcesDirectory)/artifacts/bin/coreclr/$(osGroup).x64.$(buildConfigUpper)
- script: $(PythonScript) $(Build.SourcesDirectory)/src/coreclr/scripts/superpmi.py upload -jit_ee_version test_collect -log_level DEBUG -arch $(archType) -build_type $(buildConfig) -mch_files $(MergedMchFileLocation)$(CollectionName).$(CollectionType).$(MchFileTag).mch -core_root $(Build.SourcesDirectory)/artifacts/bin/coreclr/$(osGroup).x64.$(buildConfigUpper)
displayName: ${{ format('Upload SuperPMI {0}-{1} collection to Azure Storage', parameters.collectionName, parameters.collectionType) }}
env:
CLRJIT_AZ_KEY: $(clrjit_key1) # secret key stored as variable in pipeline
@@ -184,15 +174,15 @@ jobs:
condition: always()
- task: PublishPipelineArtifact@1
displayName: Publish Superpmi logs
displayName: Publish SuperPMI logs
inputs:
targetPath: $(SpmiLogsLocation)
artifactName: 'SuperPMI_Logs_$(CollectionName)_$(CollectionType)_$(osGroup)$(osSubgroup)_$(archType)_$(buildConfig)_${{ parameters.runtimeType }}_${{ parameters.codeGenType }}_${{ parameters.runKind }}'
artifactName: 'SuperPMI_Logs_$(CollectionName)_$(CollectionType)_$(osGroup)$(osSubgroup)_$(archType)_$(buildConfig)'
condition: always()
- task: PublishPipelineArtifact@1
displayName: Publish SuperPMI build logs
inputs:
targetPath: $(Build.SourcesDirectory)/artifacts/log
artifactName: 'SuperPMI_BuildLogs_$(CollectionName)_$(CollectionType)_$(osGroup)$(osSubgroup)_$(archType)_$(buildConfig)_${{ parameters.runtimeType }}_${{ parameters.codeGenType }}_${{ parameters.runKind }}'
artifactName: 'SuperPMI_BuildLogs_$(CollectionName)_$(CollectionType)_$(osGroup)$(osSubgroup)_$(archType)_$(buildConfig)'
condition: always()
@@ -9,18 +9,12 @@ parameters:
archType: '' # required -- targeting CPU architecture
osGroup: '' # required -- operating system for the job
osSubgroup: '' # optional -- operating system subgroup
extraSetupParameters: '' # optional -- extra arguments to pass to the setup script
frameworks: ['netcoreapp3.0'] # optional -- list of frameworks to run against
continueOnError: 'false' # optional -- determines whether to continue the build if the step errors
dependsOn: '' # optional -- dependencies of the job
timeoutInMinutes: 320 # optional -- timeout for the job
enableTelemetry: false # optional -- enable for telemetry
liveLibrariesBuildConfig: '' # optional -- live-live libraries configuration to use for the run
runtimeType: 'coreclr' # optional -- Sets the runtime as coreclr or mono
codeGenType: 'JIT' # optional -- Decides on the codegen technology if running on mono
runKind: '' # required -- test category
collectionType: ''
collectionName: ''
helixQueues: '' # required -- Helix queues
dependOnEvaluatePaths: false
jobs:
@@ -35,8 +29,6 @@ jobs:
enableTelemetry: ${{ parameters.enableTelemetry }}
enablePublishBuildArtifacts: true
continueOnError: ${{ parameters.continueOnError }}
collectionType: $ {{ parameters.collectionType }}
collectionName: ${{ parameters.collectionName }}
dependOnEvaluatePaths: ${{ parameters.dependOnEvaluatePaths }}
timeoutInMinutes: ${{ parameters.timeoutInMinutes }}
@@ -46,6 +38,7 @@ jobs:
displayName: '${{ parameters.jobName }}'
variables:
- ${{ each variable in parameters.variables }}:
- ${{ if ne(variable.name, '') }}:
- name: ${{ variable.name }}
@@ -69,11 +62,6 @@ jobs:
pool:
${{ parameters.pool }}
container: ${{ parameters.container }}
strategy:
matrix:
${{ each framework in parameters.frameworks }}:
${{ framework }}:
_Framework: ${{ framework }}
steps:
- ${{ parameters.steps }}
@@ -85,17 +73,18 @@ jobs:
displayName: ${{ format('SuperPMI replay setup ({0} {1})', parameters.osGroup, parameters.archType) }}
# Run superpmi replay in helix
- template: /eng/pipelines/coreclr/templates/superpmi-send-to-helix.yml
- template: /eng/pipelines/common/templates/runtimes/send-to-helix-step.yml
parameters:
HelixSource: '$(HelixSourcePrefix)/$(Build.Repository.Name)/$(Build.SourceBranch)' # sources must start with pr/, official/, prodcon/, or agent/
HelixAccessToken: $(HelixApiAccessToken)
HelixTargetQueues: $(Queue)
HelixPreCommands: $(HelixPreCommand)
Creator: $(Creator)
displayName: 'Send job to Helix'
helixBuild: $(Build.BuildNumber)
helixSource: $(_HelixSource)
helixType: 'build/tests/'
helixQueues: ${{ join(',', parameters.helixQueues) }}
creator: dotnet-bot
WorkItemTimeout: 4:00 # 4 hours
WorkItemDirectory: '$(WorkItemDirectory)'
CorrelationPayloadDirectory: '$(CorrelationPayloadDirectory)'
ProjectFile: 'superpmi-replay.proj'
helixProjectArguments: '$(Build.SourcesDirectory)/src/coreclr/scripts/superpmi-replay.proj'
BuildConfig: ${{ parameters.buildConfig }}
osGroup: ${{ parameters.osGroup }}
archType: ${{ parameters.archType }}
@@ -111,7 +100,7 @@ jobs:
condition: always()
- task: PublishPipelineArtifact@1
displayName: Publish Superpmi logs
displayName: Publish SuperPMI logs
inputs:
targetPath: $(SpmiLogsLocation)
artifactName: 'SuperPMI_Logs_$(archType)_$(buildConfig)'
@@ -121,5 +110,5 @@ jobs:
displayName: Publish SuperPMI build logs
inputs:
targetPath: $(Build.SourcesDirectory)/artifacts/log
artifactName: 'SuperPMI_BuildLogs__$(archType)_$(buildConfig)'
condition: always()
\ No newline at end of file
artifactName: 'SuperPMI_BuildLogs_$(archType)_$(buildConfig)'
condition: always()
@@ -3,62 +3,43 @@ parameters:
archType: ''
osGroup: ''
osSubgroup: ''
container: ''
runtimeVariant: ''
testGroup: ''
framework: net5.0 # Specify the appropriate framework when running release branches (ie netcoreapp3.0 for release/3.0)
liveLibrariesBuildConfig: ''
variables: {}
runtimeType: 'coreclr'
pool: ''
codeGenType: 'JIT'
projetFile: ''
runKind: ''
runJobTemplate: '/eng/pipelines/coreclr/templates/run-superpmi-job.yml'
additionalSetupParameters: ''
runJobTemplate: '/eng/pipelines/coreclr/templates/run-superpmi-collect-job.yml'
collectionType: ''
collectionName: ''
### SuperPMI job
### Each superpmi job depends on a corresponding build job with the same
### buildConfig and archType.
### Each collection job depends on a corresponding build job with the same buildConfig and archType.
jobs:
- template: ${{ parameters.runJobTemplate }}
parameters:
# Compute job name from template parameters
jobName: ${{ format('superpmibuild_{0}{1}_{2}_{3}_{4}_{5}_{6}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig, parameters.runtimeType, parameters.codeGenType, parameters.runKind) }}
displayName: ${{ format('SuperPMI {7} {8} {0}{1} {2} {3} {4} {5} {6}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig, parameters.runtimeType, parameters.codeGenType, parameters.runKind, parameters.collectionName, parameters.collectionType) }}
jobName: ${{ format('superpmi_collect_{0}{1}_{2}_{3}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig) }}
displayName: ${{ format('SuperPMI collect {4} {5} {0}{1} {2} {3}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig, parameters.collectionName, parameters.collectionType) }}
pool: ${{ parameters.pool }}
buildConfig: ${{ parameters.buildConfig }}
archType: ${{ parameters.archType }}
osGroup: ${{ parameters.osGroup }}
osSubgroup: ${{ parameters.osSubgroup }}
runtimeVariant: ${{ parameters.runtimeVariant }}
liveLibrariesBuildConfig: ${{ parameters.liveLibrariesBuildConfig }}
runtimeType: ${{ parameters.runtimeType }}
codeGenType: ${{ parameters.codeGenType }}
runKind: ${{ parameters.runKind }}
testGroup: ${{ parameters.testGroup }}
collectionType: ${{ parameters.collectionType }}
collectionName: ${{ parameters.collectionName }}
additionalSetupParameters: ${{ parameters.additionalSetupParameters }}
# Test job depends on the corresponding build job
dependsOn:
- ${{ format('coreclr_{0}_product_build_{1}{2}_{3}_{4}', parameters.runtimeVariant, parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig) }}
- ${{ format('coreclr__product_build_{0}{1}_{2}_{3}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig) }}
# Depend on coreclr x64 so we can download it and use mcs.exe from it while publishing non-x64 arch SPMI collection
- ${{ if ne(parameters.archType, 'x64') }}:
- ${{ format('coreclr_{0}_product_build_{1}{2}_x64_{3}', parameters.runtimeVariant, parameters.osGroup, parameters.osSubgroup, parameters.buildConfig) }}
- ${{ format('coreclr__product_build_{0}{1}_x64_{2}', parameters.osGroup, parameters.osSubgroup, parameters.buildConfig) }}
- ${{ if ne(parameters.liveLibrariesBuildConfig, '') }}:
- ${{ format('libraries_build_{0}{1}_{2}_{3}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.liveLibrariesBuildConfig) }}
- ${{ if eq(parameters.collectionName, 'coreclr_tests') }}:
- '${{ parameters.runtimeType }}_common_test_build_p1_AnyOS_AnyCPU_${{parameters.buildConfig }}'
- 'coreclr_common_test_build_p1_AnyOS_AnyCPU_${{parameters.buildConfig }}'
variables: ${{ parameters.variables }}
frameworks:
- ${{ parameters.framework }}
steps:
# Extra steps that will be passed to the superpmi template and run before sending the job to helix (all of which is done in the template)
@@ -85,8 +66,8 @@ jobs:
- template: /eng/pipelines/common/download-artifact-step.yml
parameters:
unpackFolder: '$(Build.SourcesDirectory)/artifacts/bin/coreclr/$(osGroup).x64.$(buildConfigUpper)'
artifactFileName: 'CoreCLRProduct__${{ parameters.runtimeVariant }}_$(osGroup)$(osSubgroup)_x64_$(buildConfig)$(archiveExtension)'
artifactName: 'CoreCLRProduct__${{ parameters.runtimeVariant }}_$(osGroup)$(osSubgroup)_x64_$(buildConfig)'
artifactFileName: 'CoreCLRProduct___$(osGroup)$(osSubgroup)_x64_$(buildConfig)$(archiveExtension)'
artifactName: 'CoreCLRProduct___$(osGroup)$(osSubgroup)_x64_$(buildConfig)'
displayName: 'Coreclr product build (x64)'
# Download and unzip managed test artifacts
@@ -4,18 +4,17 @@ parameters:
osGroup: '' # required -- operating system for the job
osSubgroup: '' # optional -- operating system subgroup
pool: ''
stagedBuild: false
timeoutInMinutes: 320 # build timeout
framework: net5.0 # Specify the appropriate framework when running release branches (ie netcoreapp3.0 for release/3.0)
variables: {}
helixQueues: ''
dependOnEvaluatePaths: false
runJobTemplate: '/eng/pipelines/coreclr/templates/run-superpmi-replay-job.yml'
jobs:
- template: ${{ parameters.runJobTemplate }}
parameters:
jobName: ${{ format('superpmibuild_{0}{1}_{2}_{3}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig) }}
displayName: ${{ format('SuperPMI replay {0} {1}', parameters.osGroup, parameters.archType) }}
jobName: ${{ format('superpmi_replay_{0}{1}_{2}_{3}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig) }}
displayName: ${{ format('SuperPMI replay {0}{1} {2} {3}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig) }}
pool: ${{ parameters.pool }}
buildConfig: ${{ parameters.buildConfig }}
archType: ${{ parameters.archType }}
@@ -23,15 +22,12 @@ jobs:
osSubgroup: ${{ parameters.osSubgroup }}
dependOnEvaluatePaths: ${{ parameters.dependOnEvaluatePaths }}
timeoutInMinutes: ${{ parameters.timeoutInMinutes }}
additionalSetupParameters: ${{ parameters.additionalSetupParameters }}
helixQueues: ${{ parameters.helixQueues }}
dependsOn:
- ${{ format('coreclr_jit_build_{0}{1}_{2}_{3}', parameters.osGroup, parameters.osSubgroup, parameters.archType, parameters.buildConfig) }}
variables: ${{ parameters.variables }}
frameworks:
- ${{ parameters.framework }}
steps:
# Download jit builds
@@ -20,8 +20,7 @@ from os import path, walk
from os.path import getsize
import os
from coreclr_arguments import *
from superpmi_setup import run_command
from superpmi import TempDir
from azdo_pipelines_util import run_command, TempDir
parser = argparse.ArgumentParser(description="description")
@@ -19,8 +19,7 @@ from os import path
import os
from os import listdir
from coreclr_arguments import *
from superpmi_setup import run_command, copy_directory, set_pipeline_variable
from superpmi import ChangeDir, TempDir
from azdo_pipelines_util import run_command, copy_directory, set_pipeline_variable, ChangeDir, TempDir
import tempfile
parser = argparse.ArgumentParser(description="description")
@@ -88,7 +87,6 @@ def main(main_args):
helix_source_prefix = "official"
creator = ""
ci = True
# create exploratory directory
print('Copying {} -> {}'.format(scripts_src_directory, coreroot_dst_directory))
#!/usr/bin/env python3
#
# Licensed to the .NET Foundation under one or more agreements.
# The .NET Foundation licenses this file to you under the MIT license.
#
# Title : azdo_pipelines_util.py
#
# Notes:
#
# Utility functions used by Python scripts involved with Azure DevOps Pipelines
# setup.
#
################################################################################
################################################################################
import os
import shutil
import subprocess
import sys
import tempfile
def run_command(command_to_run, _cwd=None, _exit_on_fail=False, _output_file=None):
""" Runs the command.
Args:
command_to_run ([string]): Command to run along with arguments.
_cwd (string): Current working directory.
_exit_on_fail (bool): If it should exit on failure.
Returns:
(string, string, int): Returns a tuple of stdout, stderr, and command return code if _output_file= None
Otherwise stdout, stderr are empty.
"""
print("Running: " + " ".join(command_to_run))
command_stdout = ""
command_stderr = ""
return_code = 1
output_type = subprocess.STDOUT if _output_file else subprocess.PIPE
with subprocess.Popen(command_to_run, stdout=subprocess.PIPE, stderr=output_type, cwd=_cwd) as proc:
# For long running command, continuously print the output
if _output_file:
while True:
with open(_output_file, 'a') as of:
output = proc.stdout.readline()
if proc.poll() is not None:
break
if output:
of.write(output.strip().decode("utf-8") + "\n")
else:
command_stdout, command_stderr = proc.communicate()
if len(command_stdout) > 0:
print(command_stdout.decode("utf-8"))
if len(command_stderr) > 0:
print(command_stderr.decode("utf-8"))
return_code = proc.returncode
if _exit_on_fail and return_code != 0:
print("Command failed. Exiting.")
sys.exit(1)
return command_stdout, command_stderr, return_code
def copy_directory(src_path, dst_path, verbose_output=True, match_func=lambda path: True):
"""Copies directory in 'src_path' to 'dst_path' maintaining the directory
structure. https://docs.python.org/3.5/library/shutil.html#shutil.copytree can't
be used in this case because it expects the destination directory should not
exist, however we do call copy_directory() to copy files to same destination directory.
Args:
src_path (string): Path of source directory that need to be copied.
dst_path (string): Path where directory should be copied.
verbose_output (bool): True to print every copy or skipped file.
match_func (str -> bool) : Criteria function determining if a file is copied.
"""
if not os.path.exists(dst_path):
os.makedirs(dst_path)
for item in os.listdir(src_path):
src_item = os.path.join(src_path, item)
dst_item = os.path.join(dst_path, item)
if os.path.isdir(src_item):
copy_directory(src_item, dst_item, verbose_output, match_func)
else:
try:
if match_func(src_item):
if verbose_output:
print("> copy {0} => {1}".format(src_item, dst_item))
try:
shutil.copy2(src_item, dst_item)
except PermissionError as pe_error:
print('Ignoring PermissionError: {0}'.format(pe_error))
else:
if verbose_output:
print("> skipping {0}".format(src_item))
except UnicodeEncodeError:
if verbose_output:
print("> Got UnicodeEncodeError")
def copy_files(src_path, dst_path, file_names):
"""Copy files from 'file_names' list from 'src_path' to 'dst_path'.
It retains the original directory structure of src_path.
Args:
src_path (string): Source directory from where files are copied.
dst_path (string): Destination directory where files to be copied.
file_names ([string]): List of full path file names to be copied.
"""
print('### Copying below files from {0} to {1}:'.format(src_path, dst_path))
print('')
print(os.linesep.join(file_names))
for f in file_names:
# Create same structure in dst so we don't clobber same files names present in different directories
dst_path_of_file = f.replace(src_path, dst_path)
dst_directory = os.path.dirname(dst_path_of_file)
if not os.path.exists(dst_directory):
os.makedirs(dst_directory)
try:
shutil.copy2(f, dst_path_of_file)
except PermissionError as pe_error:
print('Ignoring PermissionError: {0}'.format(pe_error))
def set_pipeline_variable(name, value):
""" This method sets pipeline variable.
Args:
name (string): Name of the variable.
value (string): Value of the variable.
"""
define_variable_format = "##vso[task.setvariable variable={0}]{1}"
print("{0} -> {1}".format(name, value)) # logging
print(define_variable_format.format(name, value)) # set variable
class TempDir:
""" Class to create a temporary working directory, or use one that is passed as an argument.
Use with: "with TempDir() as temp_dir" to change to that directory and then automatically
change back to the original working directory afterwards and remove the temporary
directory and its contents (if skip_cleanup is False).
"""
def __init__(self, path=None, skip_cleanup=False):
self.mydir = tempfile.mkdtemp() if path is None else path
self.cwd = None
self._skip_cleanup = skip_cleanup
def __enter__(self):
self.cwd = os.getcwd()
os.chdir(self.mydir)
return self.mydir
def __exit__(self, exc_type, exc_val, exc_tb):
os.chdir(self.cwd)
if not self._skip_cleanup:
shutil.rmtree(self.mydir)
class ChangeDir:
""" Class to temporarily change to a given directory. Use with "with".
"""
def __init__(self, mydir):
self.mydir = mydir
self.cwd = None
def __enter__(self):
self.cwd = os.getcwd()
os.chdir(self.mydir)
def __exit__(self, exc_type, exc_val, exc_tb):
os.chdir(self.cwd)
\ No newline at end of file
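
As an editorial usage sketch (not part of the commit), the helpers in
azdo_pipelines_util.py compose as follows; the command and variable name are
placeholders:

    from azdo_pipelines_util import TempDir, run_command, set_pipeline_variable

    # Change into a fresh temporary directory; keep it around after the block.
    with TempDir(skip_cleanup=True) as scratch:
        # Returns (stdout, stderr, returncode); exits the process on failure.
        stdout, stderr, rc = run_command(["git", "--version"], _exit_on_fail=True)

    # Prints the ##vso[task.setvariable ...] logging command that Azure
    # Pipelines parses into a job-scoped variable.
    set_pipeline_variable("WorkItemDirectory", scratch)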
@@ -30,10 +30,19 @@
<SuperpmiLogsLocation>%HELIX_WORKITEM_UPLOAD_ROOT%</SuperpmiLogsLocation>
<!-- Workaround until https://github.com/dotnet/arcade/pull/6179 is not available -->
<HelixResultsDestinationDir>$(BUILD_SOURCESDIRECTORY)\artifacts\helixresults</HelixResultsDestinationDir>
<WorkItemCommand>$(Python) $(ProductDirectory)\superpmi-replay.py -jit_directory $(ProductDirectory)</WorkItemCommand>
<WorkItemCommand>$(Python) $(ProductDirectory)\superpmi_replay.py -jit_directory $(ProductDirectory)</WorkItemCommand>
<WorkItemTimeout>5:00</WorkItemTimeout>
</PropertyGroup>
<PropertyGroup>
<EnableAzurePipelinesReporter>false</EnableAzurePipelinesReporter>
<EnableXUnitReporter>false</EnableXUnitReporter>
<WorkItemTimeout>5:00</WorkItemTimeout>
<Creator>$(_Creator)</Creator>
<HelixAccessToken>$(_HelixAccessToken)</HelixAccessToken>
<HelixBuild>$(_HelixBuild)</HelixBuild>
<HelixSource>$(_HelixSource)</HelixSource>
<HelixTargetQueues>$(_HelixTargetQueues)</HelixTargetQueues>
<HelixType>$(_HelixType)</HelixType>
</PropertyGroup>
<ItemGroup>
@@ -20,8 +20,7 @@ from os import path
from os.path import isfile
from shutil import copyfile
from coreclr_arguments import *
from superpmi import ChangeDir, TempDir
from superpmi_setup import run_command
from azdo_pipelines_util import run_command, ChangeDir, TempDir
# Start of parser object creation.
is_windows = platform.system() == "Windows"
@@ -3,8 +3,7 @@
# Licensed to the .NET Foundation under one or more agreements.
# The .NET Foundation licenses this file to you under the MIT license.
#
##
# Title : superpmi_setup.py
# Title : superpmi_collect_setup.py
#
# Notes:
#
@@ -21,7 +20,7 @@
# `CORE_ROOT` folder and this script will copy `max_size` bytes of those files under `payload/libraries/0/binaries`,
# `payload/libraries/1/binaries` and so forth.
# 4. Lastly, it sets the pipeline variables.
#
# Below are the helix queues it sets depending on the OS/architecture:
# | Arch | windows | Linux |
# |-------|----------------------|--------------------------------------------------------------------------------------------------------------------------------------|
@@ -29,20 +28,17 @@
# | x64 | Windows.10.Amd64.X86 | Ubuntu.1804.Amd64 |
# | arm | - | (Ubuntu.1804.Arm32)Ubuntu.1804.Armarch@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440 |
# | arm64 | Windows.10.Arm64 | (Ubuntu.1804.Arm64)Ubuntu.1804.ArmArch@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm64v8-20210531091519-97d8652 |
#
################################################################################
################################################################################
import argparse
import shutil
import os
import stat
import subprocess
import tempfile
from os import linesep, listdir, path, walk
from os.path import isfile, join, getsize
from coreclr_arguments import *
from superpmi import ChangeDir
from azdo_pipelines_util import run_command, copy_directory, copy_files, set_pipeline_variable, ChangeDir, TempDir
# Start of parser object creation.
@@ -251,7 +247,7 @@ def get_files_sorted_by_size(src_directory, exclude_directories, exclude_files):
filename_with_size = []
for file_path, dirs, files in walk(src_directory, topdown=True):
for file_path, dirs, files in os.walk(src_directory, topdown=True):
# Credit: https://stackoverflow.com/a/19859907
dirs[:] = [d for d in dirs if d not in exclude_directories]
for name in files:
@@ -259,14 +255,14 @@ def get_files_sorted_by_size(src_directory, exclude_directories, exclude_files):
exclude_files_lower = [filename.lower() for filename in exclude_files]
if name.lower() in exclude_files_lower:
continue
curr_file_path = path.join(file_path, name)
curr_file_path = os.path.join(file_path, name)
if not isfile(curr_file_path):
if not os.path.isfile(curr_file_path):
continue
if not name.endswith(".dll") and not name.endswith(".exe"):
continue
size = getsize(curr_file_path)
size = os.path.getsize(curr_file_path)
filename_with_size.append((curr_file_path, size))
return sorter_by_size(filename_with_size)
@@ -312,110 +308,6 @@ def first_fit(sorted_by_size, max_size):
return partitions
def run_command(command_to_run, _cwd=None, _exit_on_fail=False, _output_file=None):
""" Runs the command.
Args:
command_to_run ([string]): Command to run along with arguments.
_cwd (string): Current working directory.
_exit_on_fail (bool): If it should exit on failure.
Returns:
(string, string, int): Returns a tuple of stdout, stderr, and command return code if _output_file= None
Otherwise stdout, stderr are empty.
"""
print("Running: " + " ".join(command_to_run))
command_stdout = ""
command_stderr = ""
return_code = 1
output_type = subprocess.STDOUT if _output_file else subprocess.PIPE
with subprocess.Popen(command_to_run, stdout=subprocess.PIPE, stderr=output_type, cwd=_cwd) as proc:
# For long running command, continuously print the output
if _output_file:
while True:
with open(_output_file, 'a') as of:
output = proc.stdout.readline()
if proc.poll() is not None:
break
if output:
of.write(output.strip().decode("utf-8") + "\n")
else:
command_stdout, command_stderr = proc.communicate()
if len(command_stdout) > 0:
print(command_stdout.decode("utf-8"))
if len(command_stderr) > 0:
print(command_stderr.decode("utf-8"))
return_code = proc.returncode
if _exit_on_fail and return_code != 0:
print("Command failed. Exiting.")
sys.exit(1)
return command_stdout, command_stderr, return_code
def copy_directory(src_path, dst_path, verbose_output=True, match_func=lambda path: True):
"""Copies directory in 'src_path' to 'dst_path' maintaining the directory
structure. https://docs.python.org/3.5/library/shutil.html#shutil.copytree can't
be used in this case because it expects the destination directory should not
exist, however we do call copy_directory() to copy files to same destination directory.
Args:
src_path (string): Path of source directory that need to be copied.
dst_path (string): Path where directory should be copied.
verbose_output (bool): True to print every copy or skipped file.
match_func (str -> bool) : Criteria function determining if a file is copied.
"""
if not os.path.exists(dst_path):
os.makedirs(dst_path)
for item in os.listdir(src_path):
src_item = os.path.join(src_path, item)
dst_item = os.path.join(dst_path, item)
if os.path.isdir(src_item):
copy_directory(src_item, dst_item, verbose_output, match_func)
else:
try:
if match_func(src_item):
if verbose_output:
print("> copy {0} => {1}".format(src_item, dst_item))
try:
shutil.copy2(src_item, dst_item)
except PermissionError as pe_error:
print('Ignoring PermissionError: {0}'.format(pe_error))
else:
if verbose_output:
print("> skipping {0}".format(src_item))
except UnicodeEncodeError:
if verbose_output:
print("> Got UnicodeEncodeError")
def copy_files(src_path, dst_path, file_names):
"""Copy files from 'file_names' list from 'src_path' to 'dst_path'.
It retains the original directory structure of src_path.
Args:
src_path (string): Source directory from where files are copied.
dst_path (string): Destination directory where files to be copied.
file_names ([string]): List of full path file names to be copied.
"""
print('### Copying below files from {0} to {1}:'.format(src_path, dst_path))
print('')
print(os.linesep.join(file_names))
for f in file_names:
# Create same structure in dst so we don't clobber same files names present in different directories
dst_path_of_file = f.replace(src_path, dst_path)
dst_directory = path.dirname(dst_path_of_file)
if not os.path.exists(dst_directory):
os.makedirs(dst_directory)
try:
shutil.copy2(f, dst_path_of_file)
except PermissionError as pe_error:
print('Ignoring PermissionError: {0}'.format(pe_error))
def partition_files(src_directory, dst_directory, max_size, exclude_directories=[],
exclude_files=native_binaries_to_ignore):
""" Copy bucketized files based on size to destination folder.
@@ -435,7 +327,7 @@ def partition_files(src_directory, dst_directory, max_size, exclude_directories=
index = 0
for p_index in partitions:
file_names = [curr_file[0] for curr_file in partitions[p_index]]
curr_dst_path = path.join(dst_directory, str(index), "binaries")
curr_dst_path = os.path.join(dst_directory, str(index), "binaries")
copy_files(src_directory, curr_dst_path, file_names)
index += 1
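
partition_files above buckets files via first_fit, whose body is collapsed in
this diff. As an editorial sketch (assuming greedy placement into the first
bucket with room, and skipping single files larger than the cap), first-fit
bucketing of (path, size) pairs can look like this:

    # Sketch only -- not the committed implementation.
    def first_fit_sketch(sorted_by_size, max_size):
        partitions = {}
        sizes = {}
        for file_path, size in sorted_by_size:
            if size > max_size:
                continue  # a file larger than the cap is skipped, not split
            for index in partitions:
                if sizes[index] + size <= max_size:
                    partitions[index].append((file_path, size))
                    sizes[index] += size
                    break
            else:
                index = len(partitions)
                partitions[index] = [(file_path, size)]
                sizes[index] = size
        return partitions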
@@ -447,7 +339,7 @@ def setup_microbenchmark(workitem_directory, arch):
workitem_directory (string): Path to work
arch (string): Architecture for which dotnet will be installed
"""
performance_directory = path.join(workitem_directory, "performance")
performance_directory = os.path.join(workitem_directory, "performance")
run_command(
["git", "clone", "--quiet", "--depth", "1", "https://github.com/dotnet/performance", performance_directory])
@@ -456,7 +348,7 @@ def setup_microbenchmark(workitem_directory, arch):
dotnet_directory = os.path.join(performance_directory, "tools", "dotnet", arch)
dotnet_install_script = os.path.join(performance_directory, "scripts", "dotnet.py")
if not isfile(dotnet_install_script):
if not os.path.isfile(dotnet_install_script):
print("Missing " + dotnet_install_script)
return
@@ -477,18 +369,6 @@ def get_python_name():
return ["python3"]
def set_pipeline_variable(name, value):
""" This method sets pipeline variable.
Args:
name (string): Name of the variable.
value (string): Value of the variable.
"""
define_variable_format = "##vso[task.setvariable variable={0}]{1}"
print("{0} -> {1}".format(name, value)) # logging
print(define_variable_format.format(name, value)) # set variable
def main(main_args):
""" Main entrypoint
@@ -499,9 +379,9 @@ def main(main_args):
source_directory = coreclr_args.source_directory
# CorrelationPayload directories
correlation_payload_directory = path.join(coreclr_args.source_directory, "payload")
superpmi_src_directory = path.join(source_directory, 'src', 'coreclr', 'scripts')
superpmi_dst_directory = path.join(correlation_payload_directory, "superpmi")
correlation_payload_directory = os.path.join(coreclr_args.source_directory, "payload")
superpmi_src_directory = os.path.join(source_directory, 'src', 'coreclr', 'scripts')
superpmi_dst_directory = os.path.join(correlation_payload_directory, "superpmi")
arch = coreclr_args.arch
helix_source_prefix = "official"
creator = ""
@@ -547,7 +427,7 @@ def main(main_args):
print("Inside make_readable")
run_command(["ls", "-l", folder_name])
for file_path, dirs, files in walk(folder_name, topdown=True):
for file_path, dirs, files in os.walk(folder_name, topdown=True):
for d in dirs:
os.chmod(os.path.join(file_path, d),
# read+write+execute for owner
@@ -571,7 +451,7 @@ def main(main_args):
copy_directory(coreclr_args.input_directory, superpmi_dst_directory, match_func=acceptable_copy)
# Workitem directories
workitem_directory = path.join(source_directory, "workitem")
workitem_directory = os.path.join(source_directory, "workitem")
input_artifacts = ""
if coreclr_args.collection_name == "benchmarks":
@@ -582,21 +462,21 @@ def main(main_args):
# Clone and build jitutils
try:
with tempfile.TemporaryDirectory() as jitutils_directory:
with TempDir() as jitutils_directory:
run_command(
["git", "clone", "--quiet", "--depth", "1", "https://github.com/dotnet/jitutils", jitutils_directory])
# Make sure ".dotnet" directory exists, by running the script at least once
dotnet_script_name = "dotnet.cmd" if is_windows else "dotnet.sh"
dotnet_script_path = path.join(source_directory, dotnet_script_name)
dotnet_script_path = os.path.join(source_directory, dotnet_script_name)
run_command([dotnet_script_path, "--info"], jitutils_directory)
# Set dotnet path to run build
os.environ["PATH"] = path.join(source_directory, ".dotnet") + os.pathsep + os.environ["PATH"]
os.environ["PATH"] = os.path.join(source_directory, ".dotnet") + os.pathsep + os.environ["PATH"]
build_file = "build.cmd" if is_windows else "build.sh"
run_command([path.join(jitutils_directory, build_file), "-p"], jitutils_directory)
run_command([os.path.join(jitutils_directory, build_file), "-p"], jitutils_directory)
copy_files(path.join(jitutils_directory, "bin"), superpmi_dst_directory, [path.join(jitutils_directory, "bin", "pmi.dll")])
copy_files(os.path.join(jitutils_directory, "bin"), superpmi_dst_directory, [os.path.join(jitutils_directory, "bin", "pmi.dll")])
except PermissionError as pe_error:
# Details: https://bugs.python.org/issue26660
print('Ignoring PermissionError: {0}'.format(pe_error))
@@ -607,14 +487,14 @@ def main(main_args):
# # Copy ".dotnet" to correlation_payload_directory for crossgen2 job; it is needed to invoke crossgen2.dll
# if coreclr_args.collection_type == "crossgen2":
# dotnet_src_directory = path.join(source_directory, ".dotnet")
# dotnet_dst_directory = path.join(correlation_payload_directory, ".dotnet")
# dotnet_src_directory = os.path.join(source_directory, ".dotnet")
# dotnet_dst_directory = os.path.join(correlation_payload_directory, ".dotnet")
# print('Copying {} -> {}'.format(dotnet_src_directory, dotnet_dst_directory))
# copy_directory(dotnet_src_directory, dotnet_dst_directory, verbose_output=False)
# payload
pmiassemblies_directory = path.join(workitem_directory, "pmiAssembliesDirectory")
input_artifacts = path.join(pmiassemblies_directory, coreclr_args.collection_name)
pmiassemblies_directory = os.path.join(workitem_directory, "pmiAssembliesDirectory")
input_artifacts = os.path.join(pmiassemblies_directory, coreclr_args.collection_name)
exclude_directory = ['Core_Root'] if coreclr_args.collection_name == "coreclr_tests" else []
exclude_files = native_binaries_to_ignore
if coreclr_args.collection_type == "crossgen2":
@@ -626,7 +506,7 @@ def main(main_args):
# libraries_tests artifacts contains files from core_root folder. Exclude them.
core_root_dir = coreclr_args.core_root_directory
exclude_files += [item for item in os.listdir(core_root_dir)
if isfile(join(core_root_dir, item)) and (item.endswith(".dll") or item.endswith(".exe"))]
if os.path.isfile(os.path.join(core_root_dir, item)) and (item.endswith(".dll") or item.endswith(".exe"))]
partition_files(coreclr_args.input_directory, input_artifacts, coreclr_args.max_size, exclude_directory,
exclude_files)
@@ -3,22 +3,19 @@
# Licensed to the .NET Foundation under one or more agreements.
# The .NET Foundation licenses this file to you under the MIT license.
#
##
# Title : superpmi_setup.py
# Title : superpmi_replay.py
#
# Notes:
#
# Script to run "superpmi replay" for various collections under various COMPlus_JitStressRegs value.
# Script to run "superpmi replay" for various collections under various COMPlus_JitStressRegs values.
#
################################################################################
################################################################################
import argparse
from os import path
import os
from os import listdir
from coreclr_arguments import *
from superpmi_setup import run_command
from azdo_pipelines_util import run_command
parser = argparse.ArgumentParser(description="description")
@@ -86,40 +83,47 @@ def main(main_args):
python_path = sys.executable
cwd = os.path.dirname(os.path.realpath(__file__))
coreclr_args = setup_args(main_args)
spmi_location = path.join(cwd, "artifacts", "spmi")
spmi_location = os.path.join(cwd, "artifacts", "spmi")
log_directory = coreclr_args.log_directory
platform_name = coreclr_args.platform
os_name = "win" if platform_name.lower() == "windows" else "unix"
arch_name = coreclr_args.arch
host_arch_name = "x64" if arch_name.endswith("64") else "x86"
os_name = "universal" if arch_name.startswith("arm") else os_name
jit_path = path.join(coreclr_args.jit_directory, 'clrjit_{}_{}_{}.dll'.format(os_name, arch_name, host_arch_name))
jit_path = os.path.join(coreclr_args.jit_directory, 'clrjit_{}_{}_{}.dll'.format(os_name, arch_name, host_arch_name))
print("Running superpmi.py download")
run_command([python_path, path.join(cwd, "superpmi.py"), "download", "--no_progress", "-target_os", platform_name,
run_command([python_path, os.path.join(cwd, "superpmi.py"), "download", "--no_progress", "-target_os", platform_name,
"-target_arch", arch_name, "-core_root", cwd, "-spmi_location", spmi_location], _exit_on_fail=True)
failed_runs = []
for jit_flag in jit_flags:
log_file = path.join(log_directory, 'superpmi_{}.log'.format(jit_flag.replace("=", "_")))
log_file = os.path.join(log_directory, 'superpmi_{}.log'.format(jit_flag.replace("=", "_")))
print("Running superpmi.py replay for {}".format(jit_flag))
_, _, return_code = run_command([
python_path, path.join(cwd, "superpmi.py"), "replay", "-core_root", cwd,
"-jitoption", jit_flag, "-jitoption", "TieredCompilation=0",
"-target_os", platform_name, "-target_arch", arch_name,
python_path,
os.path.join(cwd, "superpmi.py"),
"replay",
"-core_root", cwd,
"-jitoption", jit_flag,
"-jitoption", "TieredCompilation=0",
"-target_os", platform_name,
"-target_arch", arch_name,
"-arch", host_arch_name,
"-jit_path", jit_path, "-spmi_location", spmi_location,
"-log_level", "debug", "-log_file", log_file])
"-jit_path", jit_path,
"-spmi_location", spmi_location,
"-log_level", "debug",
"-log_file", log_file])
if return_code != 0:
failed_runs.append("Failure in {}".format(log_file))
# Consolidate all superpmi_*.logs in superpmi_platform_architecture.log
final_log_name = path.join(log_directory, "superpmi_{}_{}.log".format(platform_name, arch_name))
final_log_name = os.path.join(log_directory, "superpmi_{}_{}.log".format(platform_name, arch_name))
print("Consolidating final {}".format(final_log_name))
with open(final_log_name, "a") as final_superpmi_log:
for superpmi_log in listdir(log_directory):
for superpmi_log in os.listdir(log_directory):
if not superpmi_log.startswith("superpmi_Jit") or not superpmi_log.endswith(".log"):
continue
@@ -127,7 +131,7 @@ def main(main_args):
final_superpmi_log.write("======================================================={}".format(os.linesep))
final_superpmi_log.write("Contents from {}{}".format(superpmi_log, os.linesep))
final_superpmi_log.write("======================================================={}".format(os.linesep))
with open(path.join(log_directory, superpmi_log), "r") as current_superpmi_log:
with open(os.path.join(log_directory, superpmi_log), "r") as current_superpmi_log:
contents = current_superpmi_log.read()
final_superpmi_log.write(contents)
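
The jit_flags list iterated above sits outside these hunks; judging from the
log-file naming (superpmi_Jit*.log with "=" replaced by "_"), it holds
COMPlus_JitStressRegs settings. A hypothetical example of its shape, not the
committed list:

    # Hypothetical -- illustrates the expected shape of the flag list.
    jit_flags = [
        "JitStressRegs=0",
        "JitStressRegs=1",
        "JitStressRegs=2",
        "JitStressRegs=3",
        "JitStressRegs=4",
        "JitStressRegs=8",
    ]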
@@ -3,27 +3,21 @@
# Licensed to the .NET Foundation under one or more agreements.
# The .NET Foundation licenses this file to you under the MIT license.
#
##
# Title : superpmi_replay_setup.py
#
# Notes:
#
# Script to setup directory structure required to perform SuperPMI replay in CI.
# It creates `correlation_payload_directory` that contains clrjit*_x64.dll and clrjit*_x86.dll
# It creates `correlation_payload_directory` that contains clrjit*_x64.dll and clrjit*_x86.dll
#
################################################################################
################################################################################
import argparse
from os import path, walk
import os
import shutil
import stat
import subprocess
import tempfile
from os.path import isfile, join
from coreclr_arguments import *
from superpmi_setup import copy_directory, copy_files, set_pipeline_variable, run_command
from azdo_pipelines_util import copy_directory, copy_files, set_pipeline_variable
parser = argparse.ArgumentParser(description="description")
@@ -63,38 +57,10 @@ def setup_args(args):
return coreclr_args
def partition_mch(mch_directory, dst_directory):
from os import listdir
print("Inside partition_mch")
mch_zip_files = []
for file_path, dirs, files in walk(mch_directory, topdown=True):
for name in files:
curr_file_path = path.join(file_path, name)
if not isfile(curr_file_path):
continue
if not name.endswith(".mch.zip"):
continue
mch_zip_files.append(curr_file_path)
index = 1
for mch_file in mch_zip_files:
print("Processing {}".format(mch_file))
file_names = []
file_names += [mch_file]
file_names += [mch_file.replace(".mch.zip", ".mch.mct.zip")]
curr_dst_path = path.join(dst_directory, "partitions", str(index))
copy_files(mch_directory, curr_dst_path, file_names)
index += 1
def match_correlation_files(full_path):
file_name = os.path.basename(full_path)
if file_name.startswith("clrjit_") and file_name.endswith(".dll") and file_name.find(
"osx") == -1:
if file_name.startswith("clrjit_") and file_name.endswith(".dll") and file_name.find("osx") == -1:
return True
if file_name == "superpmi.exe" or file_name == "mcs.exe":
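# Editorial worked example of this filter (hypothetical file names):
#   clrjit_win_x64_x64.dll         -> copied (clrjit_*.dll, no "osx")
#   clrjit_unix_osx_arm64_x64.dll  -> skipped (name contains "osx")
#   superpmi.exe, mcs.exe          -> copied (explicit allow-list)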
@@ -116,13 +82,11 @@ def main(main_args):
product_directory = coreclr_args.product_directory
# CorrelationPayload directories
correlation_payload_directory = path.join(coreclr_args.source_directory, "payload")
superpmi_src_directory = path.join(source_directory, 'src', 'coreclr', 'scripts')
correlation_payload_directory = os.path.join(source_directory, "payload")
superpmi_src_directory = os.path.join(source_directory, 'src', 'coreclr', 'scripts')
helix_source_prefix = "official"
creator = ""
ci = True
helix_queue = "Windows.10.Amd64.X86"
# Copy *.py to CorrelationPayload
print('Copying {} -> {}'.format(superpmi_src_directory, correlation_payload_directory))
@@ -130,7 +94,7 @@ def main(main_args):
match_func=lambda path: any(path.endswith(extension) for extension in [".py"]))
# Copy clrjit*_arch.dll binaries to CorrelationPayload
print('Copying binaries {} -> {}'.format(arch, product_directory, correlation_payload_directory))
print('Copying binaries {} -> {}'.format(product_directory, correlation_payload_directory))
copy_directory(product_directory, correlation_payload_directory, match_func=match_correlation_files)
# Set variables
@@ -138,7 +102,6 @@ def main(main_args):
set_pipeline_variable("CorrelationPayloadDirectory", correlation_payload_directory)
set_pipeline_variable("Architecture", arch)
set_pipeline_variable("Creator", creator)
set_pipeline_variable("Queue", helix_queue)
set_pipeline_variable("HelixSourcePrefix", helix_source_prefix)