Commit be5764c4 authored by Ian Rogers, committed by Arnaldo Carvalho de Melo

perf vendor events: Update TremontX

Note: 01.org has no TremontX directory, but in its mapfile.csv the family and
model:
...
GenuineIntel-6-86,V1.17,/SNR/snowridgex_core_v1.17.json,core,,,
GenuineIntel-6-86,V1.17,/SNR/snowridgex_uncore_v1.17.json,uncore,,,
...
match the TremontX entry in perf's mapfile.csv:
...
GenuineIntel-6-86,v1,tremontx,core
...
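
As a quick cross-check, here is a minimal Python sketch (the CSV field layout
is taken only from the lines quoted above) comparing the two mapfiles on their
vendor-family-model string:

    intel_lines = [
        "GenuineIntel-6-86,V1.17,/SNR/snowridgex_core_v1.17.json,core,,,",
        "GenuineIntel-6-86,V1.17,/SNR/snowridgex_uncore_v1.17.json,uncore,,,",
    ]
    perf_line = "GenuineIntel-6-86,v1,tremontx,core"

    perf_cpuid = perf_line.split(",")[0]          # "GenuineIntel-6-86"
    intel_cpuids = {line.split(",")[0] for line in intel_lines}

    # Both mapfiles should agree on the vendor-family-model string.
    assert intel_cpuids == {perf_cpuid}
    print(perf_cpuid, "-> SNR (SnowridgeX) event files, v1.17")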

Events are at version 1.17:
    https://download.01.org/perfmon/SNR
Json files generated by the latest code at:
    https://github.com/intel/event-converter-for-linux-perf

floating-point.json is added.
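
The generated files use the flat, sorted-key layout seen in the JSON below. As
a rough, hypothetical Python sketch (not the actual
event-converter-for-linux-perf code), one entry can be emitted in that layout
like so:

    import json

    # Example entry copied from one of the generated files shown below
    # (PublicDescription omitted for brevity).
    event = {
        "BriefDescription": "Counts the number of instruction cache misses.",
        "CollectPEBSRecord": "2",
        "Counter": "0,1,2,3",
        "EventCode": "0x80",
        "EventName": "ICACHE.MISSES",
        "PDIR_COUNTER": "na",
        "PEBScounters": "0,1,2,3",
        "SampleAfterValue": "200003",
        "UMask": "0x2",
    }

    # indent=0 plus sort_keys reproduces the flat, sorted-key layout of the
    # generated files.
    print(json.dumps([event], indent=0, sort_keys=True))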

Tested:

Not tested on a SnowridgeX/TremontX; tested on a SkylakeX:

  ...
    9: Parse perf pmu format                                           : Ok
   10: PMU events                                                      :
   10.1: PMU event table sanity                                        : Ok
   10.2: PMU event map aliases                                         : Ok
   10.3: Parsing of PMU event table metrics                            : Ok
   10.4: Parsing of PMU event table metrics with fake PMUs             : Ok
  ...
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Clark <james.clark@arm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Zhengjun Xing <zhengjun.xing@linux.intel.com>
Link: https://lore.kernel.org/r/20220201015858.1226914-27-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Parent 4ad91126
[
{
"BriefDescription": "Counts the number of core requests (demand and L1 prefetchers) rejected by the L2 queue (L2Q) due to a full condition.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "CORE_REJECT_L2Q.ANY",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of (demand and L1 prefetchers) core requests rejected by the L2 queue (L2Q) due to a full or nearly full condition, which likely indicates back pressure from L2Q. It also counts requests that would have gone directly to the External Queue (XQ), but are rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to ensure fairness between cores, or to delay a cores dirty eviction when the address conflicts incoming external snoops. (Note that L2 prefetcher requests that are dropped are not counted by this event). Counts on a per core basis.",
"SampleAfterValue": "200003"
},
{
"BriefDescription": "Counts the number of first level data cacheline (dirty) evictions caused by misses, stores, and prefetches.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "DL1.DIRTY_EVICTION",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of first level data cacheline (dirty) evictions caused by misses, stores, and prefetches. Does not count evictions or dirty writebacks caused by snoops. Does not count a replacement unless a (dirty) line was written back.",
"SampleAfterValue": "200003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of demand and prefetch transactions that the External Queue (XQ) rejects due to a full or near full condition.",
"CollectPEBSRecord": "2",
"PublicDescription": "Counts cacheable memory requests that miss in the the Last Level Cache. Requests include Demand Loads, Reads for Ownership(RFO), Instruction fetches and L1 HW prefetches. If the platform has an L3 cache, last level cache is the L3, otherwise it is the L2.",
"EventCode": "0x2e",
"Counter": "0,1,2,3",
"UMask": "0x41",
"EventCode": "0x30",
"EventName": "L2_REJECT_XQ.ANY",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of demand and prefetch transactions that the External Queue (XQ) rejects due to a full or near full condition which likely indicates back pressure from the IDI link. The XQ may reject transactions from the L2Q (non-cacheable requests), BBL (L2 misses) and WOB (L2 write-back victims).",
"SampleAfterValue": "200003"
},
{
"BriefDescription": "Counts the number of cacheable memory requests that miss in the LLC. Counts on a per core basis.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
"SampleAfterValue": "200003",
"BriefDescription": "Counts memory requests originating from the core that miss in the last level cache. If the platform has an L3 cache, last level cache is the L3, otherwise it is the L2."
"UMask": "0x41"
},
{
"BriefDescription": "Counts the number of cacheable memory requests that access the LLC. Counts on a per core basis.",
"CollectPEBSRecord": "2",
"PublicDescription": "Counts cacheable memory requests that access the Last Level Cache. Requests include Demand Loads, Reads for Ownership(RFO), Instruction fetches and L1 HW prefetches. If the platform has an L3 cache, last level cache is the L3, otherwise it is the L2.",
"Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
"SampleAfterValue": "200003",
"UMask": "0x4f"
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
"Counter": "0,1,2,3",
"UMask": "0x4f",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH",
"PEBScounters": "0,1,2,3",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"SampleAfterValue": "200003",
"UMask": "0x38"
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in DRAM or MMIO (Non-DRAM).",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_DRAM_HIT",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of cycles a core is stalled due to an instruction cache or translation lookaside buffer (TLB) access which hit in DRAM or MMIO (non-DRAM).",
"SampleAfterValue": "200003",
"BriefDescription": "Counts memory requests originating from the core that reference a cache line in the last level cache. If the platform has an L3 cache, last level cache is the L3, otherwise it is the L2."
"UMask": "0x20"
},
{
"PEBS": "1",
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2 cache.",
"CollectPEBSRecord": "2",
"PublicDescription": "Counts the number of load uops retired. This event is Precise Event capable",
"EventCode": "0xd0",
"Counter": "0,1,2,3",
"UMask": "0x81",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_L2_HIT",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
"PublicDescription": "Counts the number of cycles a core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) access which hit in the L2 cache.",
"SampleAfterValue": "200003",
"BriefDescription": "Counts the number of load uops retired.",
"Data_LA": "1"
"UMask": "0x8"
},
{
"PEBS": "1",
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the LLC or other core with HITE/F/M.",
"CollectPEBSRecord": "2",
"PublicDescription": "Counts the number of store uops retired. This event is Precise Event capable",
"EventCode": "0xd0",
"Counter": "0,1,2,3",
"UMask": "0x82",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_LLC_HIT",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
"PublicDescription": "Counts the number of cycles a core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) access which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
"SampleAfterValue": "200003",
"BriefDescription": "Counts the number of store uops retired.",
"Data_LA": "1"
"UMask": "0x10"
},
{
"PEBS": "1",
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
"CollectPEBSRecord": "2",
"EventCode": "0xd1",
"Counter": "0,1,2,3",
"UMask": "0x1",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD",
"PEBScounters": "0,1,2,3",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
"SampleAfterValue": "200003",
"BriefDescription": "Counts the number of load uops retired that hit the level 1 data cache",
"Data_LA": "1"
"UMask": "0x7"
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in DRAM or MMIO (Non-DRAM).",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_DRAM_HIT",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x4"
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the L2 cache.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_L2_HIT",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the L2 cache.",
"SampleAfterValue": "200003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the LLC or other core with HITE/F/M.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_LLC_HIT",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
"BriefDescription": "Counts the number of cycles a core is stalled due to a store buffer being full.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.STORE_BUFFER_FULL",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x40"
},
{
"BriefDescription": "Counts the number of load ops retired that hit in DRAM.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.DRAM_HIT",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x80"
},
{
"BriefDescription": "Counts the number of retired loads that hit in the L3 cache, in which a snoop was required and modified data was forwarded from another core.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.HITM",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x20"
},
{
"BriefDescription": "Counts the number of load uops retired that hit in the L1 data cache.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"UMask": "0x2",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
"SampleAfterValue": "200003",
"BriefDescription": "Counts the number of load uops retired that hit in the level 2 cache",
"Data_LA": "1"
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of load uops retired that miss in the L1 data cache.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x8"
},
{
"BriefDescription": "Counts the number of load uops retired that hit in the L2 cache.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
"BriefDescription": "Counts the number of load uops retired that miss in the L2 cache.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"UMask": "0x4",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x10"
},
{
"BriefDescription": "Counts the number of load uops retired that hit in the L3 cache.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L3_HIT",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"BriefDescription": "Counts the number of load uops retired that miss in the level 3 cache"
"UMask": "0x4"
},
{
"BriefDescription": "Counts the number of load uops retired.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the total number of load uops retired.",
"SampleAfterValue": "200003",
"UMask": "0x81"
},
{
"BriefDescription": "Counts the number of store uops retired.",
"CollectPEBSRecord": "2",
"EventCode": "0xd1",
"Counter": "0,1,2,3",
"UMask": "0x8",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
"PublicDescription": "Counts the total number of store uops retired.",
"SampleAfterValue": "200003",
"BriefDescription": "Counts the number of load uops retired that miss in the level 1 data cache",
"Data_LA": "1"
"UMask": "0x82"
},
{
"BriefDescription": "Counts the number of memory uops retired that were splits.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.SPLIT",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x43"
},
{
"BriefDescription": "Counts the number of retired split loads uops.",
"CollectPEBSRecord": "2",
"EventCode": "0xd1",
"Counter": "0,1,2,3",
"UMask": "0x10",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
"SampleAfterValue": "200003",
"BriefDescription": "Counts the number of load uops retired that miss in the level 2 cache",
"Data_LA": "1"
"UMask": "0x41"
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to instruction cache misses.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ICACHE",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "1000003",
"UMask": "0x20"
}
]
\ No newline at end of file
[
{
"BriefDescription": "Counts the number of cycles the floating point divider is busy. Does not imply a stall waiting for the divider.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0xcd",
"EventName": "CYCLES_DIV_BUSY.FPDIV",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
"BriefDescription": "Counts the number of floating point divide uops retired (x87 and SSE, including x87 sqrt).",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.FPDIV",
"PEBS": "1",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "2000003",
"UMask": "0x8"
}
]
\ No newline at end of file
[
{
"BriefDescription": "Counts the total number of BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
"CollectPEBSRecord": "2",
"PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line and that cache line is not in the ICache (miss). The event strives to count on a cache line basis, so that multiple accesses which miss in a single cache line count as one ICACHE.MISS. Specifically, the event counts when straight line code crosses the cache line boundary, or when a branch target is to a new line, and that cache line is not in the ICache.",
"EventCode": "0x80",
"Counter": "0,1,2,3",
"UMask": "0x2",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"EventName": "ICACHE.MISSES",
"PublicDescription": "Counts the total number of BACLEARS, which occur when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
"SampleAfterValue": "200003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of BACLEARS due to a conditional jump.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.COND",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"BriefDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in a cache line and they do not hit in the ICache (miss)."
"UMask": "0x10"
},
{
"BriefDescription": "Counts the number of BACLEARS due to an indirect branch.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.INDIRECT",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
"BriefDescription": "Counts the number of BACLEARS due to a return branch.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.RETURN",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x8"
},
{
"BriefDescription": "Counts the number of BACLEARS due to a direct, unconditional jump.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.UNCOND",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x4"
},
{
"BriefDescription": "Counts the number of times a decode restriction reduces the decode throughput due to wrong instruction length prediction.",
"CollectPEBSRecord": "2",
"PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line. The event strives to count on a cache line basis, so that multiple fetches to a single cache line count as one ICACHE.ACCESS. Specifically, the event counts when accesses from straight line code crosses the cache line boundary, or when a branch target is to a new line.",
"EventCode": "0x80",
"Counter": "0,1,2,3",
"UMask": "0x3",
"EventCode": "0xe9",
"EventName": "DECODE_RESTRICTION.PREDECODE_WRONG",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"SampleAfterValue": "200003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of requests to the instruction cache for one or more bytes of a cache line.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.ACCESSES",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the total number of requests to the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line or byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
"SampleAfterValue": "200003",
"UMask": "0x3"
},
{
"BriefDescription": "Counts the number of instruction cache hits.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of requests that hit in the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
"SampleAfterValue": "200003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of instruction cache misses.",
"CollectPEBSRecord": "2",
"Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PDIR_COUNTER": "na",
"PEBScounters": "0,1,2,3",
"PublicDescription": "Counts the number of missed requests to the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
"SampleAfterValue": "200003",
"BriefDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes cache Line."
"UMask": "0x2"
}
]
\ No newline at end of file
@@ -50,13 +50,79 @@
"Unit": "iMC"
},
{
"BriefDescription": "Precharge due to read on page miss, write on page miss or PGT",
"BriefDescription": "DRAM Activate Count : All Activates",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x01",
"EventName": "UNC_M_ACT_COUNT.ALL",
"PerPkg": "1",
"PublicDescription": "DRAM Activate Count : All Activates : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x0B",
"Unit": "iMC"
},
{
"BriefDescription": "All DRAM CAS commands issued",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.ALL",
"PerPkg": "1",
"PublicDescription": "Counts the total number of DRAM CAS commands issued on this channel.",
"UMask": "0x3f",
"Unit": "iMC"
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x45",
"EventName": "UNC_M_DRAM_REFRESH.HIGH",
"PerPkg": "1",
"PublicDescription": "Number of DRAM Refreshes Issued : Counts the number of refreshes issued.",
"UMask": "0x04",
"Unit": "iMC"
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x45",
"EventName": "UNC_M_DRAM_REFRESH.OPPORTUNISTIC",
"PerPkg": "1",
"PublicDescription": "Number of DRAM Refreshes Issued : Counts the number of refreshes issued.",
"UMask": "0x01",
"Unit": "iMC"
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x45",
"EventName": "UNC_M_DRAM_REFRESH.PANIC",
"PerPkg": "1",
"PublicDescription": "Number of DRAM Refreshes Issued : Counts the number of refreshes issued.",
"UMask": "0x02",
"Unit": "iMC"
},
{
"BriefDescription": "Half clockticks for IMC",
"Counter": "FIXED",
"CounterType": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_M_HCLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Half clockticks for IMC",
"Unit": "iMC"
},
{
"BriefDescription": "DRAM Precharge commands.",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.ALL",
"PerPkg": "1",
"UMask": "0x1c",
"PublicDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
"UMask": "0x1C",
"Unit": "iMC"
},
{
@@ -66,8 +132,92 @@
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.PGT",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Precharge due to page table : Counts the number of DRAM Precharge commands sent on this channel.",
"PublicDescription": "DRAM Precharge commands. : Precharge due to page table : Counts the number of DRAM Precharge commands sent on this channel. : Prechages from Page Table",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Allocations",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS.PCH0",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Allocations : Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.",
"UMask": "0x01",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Allocations",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS.PCH1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Allocations : Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.",
"UMask": "0x02",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Occupancy",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x80",
"EventName": "UNC_M_RPQ_OCCUPANCY_PCH0",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Occupancy : Accumulates the occupancies of the Read Pending Queue each cycle. This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory.",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Occupancy",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x81",
"EventName": "UNC_M_RPQ_OCCUPANCY_PCH1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Occupancy : Accumulates the occupancies of the Read Pending Queue each cycle. This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory.",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue Allocations",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x20",
"EventName": "UNC_M_WPQ_INSERTS.PCH0",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Allocations : Counts the number of allocations into the Write Pending Queue. This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC.",
"UMask": "0x01",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue Allocations",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x20",
"EventName": "UNC_M_WPQ_INSERTS.PCH1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Allocations : Counts the number of allocations into the Write Pending Queue. This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC.",
"UMask": "0x02",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue Occupancy",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x82",
"EventName": "UNC_M_WPQ_OCCUPANCY_PCH0",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Occupancy : Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies. So, we provide filtering based on if the request has posted or not. By using the not posted filter, we can track how long writes spent in the iMC before completions were sent to the HA. The posted filter, on the other hand, provides information about how much queueing is actually happenning in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts.",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue Occupancy",
"Counter": "0,1,2,3",
"CounterType": "PGMABLE",
"EventCode": "0x83",
"EventName": "UNC_M_WPQ_OCCUPANCY_PCH1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Occupancy : Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies. So, we provide filtering based on if the request has posted or not. By using the not posted filter, we can track how long writes spent in the iMC before completions were sent to the HA. The posted filter, on the other hand, provides information about how much queueing is actually happenning in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts.",
"Unit": "iMC"
}
]