- 09 March 2023, 2 commits
-
-
Submitted by Jakub Kicinski
Lorenzo points out that the generic CLI is broken for the netdev family. When I added the support for documentation of enums (and sparse enums) the client script was not updated. It expects the values in an enum to be a list of names, but now they can also be a dict (YAML object).

Reported-by: Lorenzo Bianconi <lorenzo@kernel.org>
Fixes: e4b48ed4 ("tools: ynl: add a completely generic client")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
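The shape difference can be pictured with a minimal Python sketch (hypothetical helper, not the actual ynl client code):

  # Hedged sketch: an enum's entries may be a plain list of names or,
  # for documented/sparse enums, a dict mapping name -> properties.
  def enum_entry_names(entries):
      if isinstance(entries, dict):
          return list(entries.keys())
      return list(entries)

  print(enum_entry_names(["off", "on"]))                     # ['off', 'on']
  print(enum_entry_names({"off": {"value": 0}, "on": {}}))   # ['off', 'on']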
-
Submitted by Jakub Kicinski
Move the bulk of the EnumSet and EnumEntry code to shared code for reuse by the CLI.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 08 March 2023, 1 commit
-
-
Submitted by Jakub Kicinski
I was intending to make all the Netlink spec code BSD-3-Clause to ease adoption, but it appears that:

 - I fumbled the uAPI and used "GPL WITH uAPI note" there
 - it gives people pause as they expect GPL in the kernel

As suggested by Chuck, re-license under a dual license. This gives us the benefit of full BSD freedom while fulfilling the broad "kernel is under GPL" expectations.

Link: https://lore.kernel.org/all/20230304120108.05dd44c5@kernel.org/
Link: https://lore.kernel.org/r/20230306200457.3903854-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 07 March 2023, 4 commits
-
-
Submitted by Arnaldo Carvalho de Melo
To pick up the changes in:

  09519ec3 ("perf: Add perf_event_attr::config3")

The patches for the tooling side will come later.

This addresses this perf build warning:

  Warning: Kernel ABI header at 'tools/include/uapi/linux/perf_event.h' differs from latest version at 'include/uapi/linux/perf_event.h'
  diff -u tools/include/uapi/linux/perf_event.h include/uapi/linux/perf_event.h

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/lkml/ZAZLYmDjWjSItWOq@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Lorenz Bauer
Add a regression test that ensures that a VAR pointing at a modifier which follows a PTR (or STRUCT or ARRAY) is resolved correctly by the datasec validator.

Signed-off-by: Lorenz Bauer <lmb@isovalent.com>
Link: https://lore.kernel.org/r/20230306112138.155352-3-lmb@isovalent.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
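The BTF pattern being exercised can be pictured with a one-line C global (illustrative only; the test presumably constructs the BTF directly):

  /* BTF chain: VAR -> CONST (modifier) -> PTR -> int; the datasec
   * validator must resolve through the modifier to size the VAR. */
  int * const ptr_behind_modifier = 0;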
-
Submitted by Alexander Lobakin
&xdp_buff and &xdp_frame are bound in a way that

  xdp_buff->data_hard_start == xdp_frame

It's always the case and e.g. xdp_convert_buff_to_frame() relies on this. IOW, the following:

  for (u32 i = 0; i < 0xdead; i++) {
  	xdpf = xdp_convert_buff_to_frame(&xdp);
  	xdp_convert_frame_to_buff(xdpf, &xdp);
  }

shouldn't ever modify @xdpf's contents or the pointer itself. However, the "live packet" code wrongly treats &xdp_frame as part of its context placed *before* the data_hard_start. With such a flow, data_hard_start is sizeof(*xdpf) off to the right and no longer points to the XDP frame.

Instead of replacing `sizeof(ctx)` with `offsetof(ctx, xdpf)` in several places and praying that there are no more miscalcs left somewhere in the code, unionize ::frm with ::data in a flex array, so that both start pointing to the actual data_hard_start and the XDP frame actually starts being a part of it, i.e. a part of the headroom, not the context.

A nice side effect is that the maximum frame size for this mode gets increased by 40 bytes, as xdp_buff::frame_sz includes everything from data_hard_start (-> includes xdpf already) to the end of XDP/skb shared info.

Also update %MAX_PKT_SIZE accordingly in the selftests code. Leave it hardcoded for 64 bit && 4k pages; it can be made more flexible later on.

Minor: align `&head->data` with how `head->frm` is assigned for consistency.
Minor #2: rename 'frm' to 'frame' in &xdp_page_head while at it for clarity.

(was found while testing the XDP traffic generator on ice, which calls xdp_convert_frame_to_buff() for each XDP frame)

Fixes: b530e9e1 ("bpf: Add "live packet" mode for XDP in BPF_PROG_RUN")
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Link: https://lore.kernel.org/r/20230224163607.2994755-1-aleksander.lobakin@intel.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
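The resulting layout can be sketched roughly like this (a hedged, kernel-internal sketch following the description above, not a verbatim copy of the patch):

  struct xdp_page_head {
  	struct xdp_buff orig_ctx;
  	struct xdp_buff ctx;
  	union {
  		/* ::data_hard_start starts here */
  		DECLARE_FLEX_ARRAY(struct xdp_frame, frame);
  		DECLARE_FLEX_ARRAY(u8, data);
  	};
  };

Both union members name the same address, so the frame becomes part of the headroom rather than part of the context.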
-
Submitted by Arnaldo Carvalho de Melo
To pick the changes from:

  8415a748 ("x86/cpu, kvm: Add support for CPUID_80000021_EAX")

This only causes these perf files to be rebuilt:

  CC      /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o
  CC      /tmp/build/perf/bench/mem-memset-x86-64-asm.o

And addresses these perf build warnings:

  Warning: Kernel ABI header at 'tools/arch/x86/include/asm/disabled-features.h' differs from latest version at 'arch/x86/include/asm/disabled-features.h'
  diff -u tools/arch/x86/include/asm/disabled-features.h arch/x86/include/asm/disabled-features.h
  Warning: Kernel ABI header at 'tools/arch/x86/include/asm/required-features.h' differs from latest version at 'arch/x86/include/asm/required-features.h'
  diff -u tools/arch/x86/include/asm/required-features.h arch/x86/include/asm/required-features.h

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kim Phillips <kim.phillips@amd.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/lkml/ZAYlS2XTJ5hRtss7@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 06 March 2023, 1 commit
-
-
Submitted by Arnaldo Carvalho de Melo
To get the changes in:

  3b688d7a ("vhost-vdpa: uAPI to resume the device")

To pick up these changes and support them:

  $ tools/perf/trace/beauty/vhost_virtio_ioctl.sh > before
  $ cp ../linux/include/uapi/linux/vhost.h tools/include/uapi/linux/vhost.h
  $ tools/perf/trace/beauty/vhost_virtio_ioctl.sh > after
  $ diff -u before after
  --- before	2023-03-06 09:26:14.889251817 -0300
  +++ after	2023-03-06 09:26:20.594406270 -0300
  @@ -30,6 +30,7 @@
  	[0x77] = "VDPA_SET_CONFIG_CALL",
  	[0x7C] = "VDPA_SET_GROUP_ASID",
  	[0x7D] = "VDPA_SUSPEND",
  +	[0x7E] = "VDPA_RESUME",
   };
   static const char *vhost_virtio_ioctl_read_cmds[] = {
  	[0x00] = "GET_FEATURES",
  $

For instance, see how those 'cmd' ioctl arguments get translated; now VDPA_RESUME will be as well:

  # perf trace -a -e ioctl --max-events=10
     0.000 ( 0.011 ms): pipewire/2261 ioctl(fd: 60, cmd: SNDRV_PCM_HWSYNC, arg: 0x1) = 0
    21.353 ( 0.014 ms): pipewire/2261 ioctl(fd: 60, cmd: SNDRV_PCM_HWSYNC, arg: 0x1) = 0
    25.766 ( 0.014 ms): gnome-shell/2196 ioctl(fd: 14, cmd: DRM_I915_IRQ_WAIT, arg: 0x7ffe4a22c740) = 0
    25.845 ( 0.034 ms): gnome-shel:cs0/2212 ioctl(fd: 14, cmd: DRM_I915_IRQ_EMIT, arg: 0x7fd43915dc70) = 0
    25.916 ( 0.011 ms): gnome-shell/2196 ioctl(fd: 9, cmd: DRM_MODE_ADDFB2, arg: 0x7ffe4a22c8a0) = 0
    25.941 ( 0.025 ms): gnome-shell/2196 ioctl(fd: 9, cmd: DRM_MODE_ATOMIC, arg: 0x7ffe4a22c840) = 0
    32.915 ( 0.009 ms): gnome-shell/2196 ioctl(fd: 9, cmd: DRM_MODE_RMFB, arg: 0x7ffe4a22cf9c) = 0
    42.522 ( 0.013 ms): gnome-shell/2196 ioctl(fd: 14, cmd: DRM_I915_IRQ_WAIT, arg: 0x7ffe4a22c740) = 0
    42.579 ( 0.031 ms): gnome-shel:cs0/2212 ioctl(fd: 14, cmd: DRM_I915_IRQ_EMIT, arg: 0x7fd43915dc70) = 0
    42.644 ( 0.010 ms): gnome-shell/2196 ioctl(fd: 9, cmd: DRM_MODE_ADDFB2, arg: 0x7ffe4a22c8a0) = 0
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Sebastien Boeuf <sebastien.boeuf@intel.com>
Link: https://lore.kernel.org/lkml/ZAXdCTecxSNwAoeK@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 04 March 2023, 6 commits
-
-
Submitted by Arnaldo Carvalho de Melo
To pick up the changes in:

  e7862eda ("x86/cpu: Support AMD Automatic IBRS")
  0125acda ("x86/bugs: Reset speculation control settings on init")
  38aaf921 ("perf/x86: Add Meteor Lake support")
  5b6fac3f ("x86/resctrl: Detect and configure Slow Memory Bandwidth Allocation")
  dc2a3e85 ("x86/resctrl: Add interface to read mbm_total_bytes_config")

Addressing these tools/perf build warnings:

  diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h
  Warning: Kernel ABI header at 'tools/arch/x86/include/asm/msr-index.h' differs from latest version at 'arch/x86/include/asm/msr-index.h'

That makes the beautification scripts pick up some new entries:

  $ tools/perf/trace/beauty/tracepoints/x86_msr.sh > before
  $ cp arch/x86/include/asm/msr-index.h tools/arch/x86/include/asm/msr-index.h
  $ tools/perf/trace/beauty/tracepoints/x86_msr.sh > after
  $ diff -u before after
  --- before	2023-03-03 18:26:51.766923522 -0300
  +++ after	2023-03-03 18:27:09.987415481 -0300
  @@ -267,9 +267,11 @@
  	[0xc000010e - x86_64_specific_MSRs_offset] = "AMD64_LBR_SELECT",
  	[0xc000010f - x86_64_specific_MSRs_offset] = "AMD_DBG_EXTN_CFG",
  	[0xc0000200 - x86_64_specific_MSRs_offset] = "IA32_MBA_BW_BASE",
  +	[0xc0000280 - x86_64_specific_MSRs_offset] = "IA32_SMBA_BW_BASE",
  	[0xc0000300 - x86_64_specific_MSRs_offset] = "AMD64_PERF_CNTR_GLOBAL_STATUS",
  	[0xc0000301 - x86_64_specific_MSRs_offset] = "AMD64_PERF_CNTR_GLOBAL_CTL",
  	[0xc0000302 - x86_64_specific_MSRs_offset] = "AMD64_PERF_CNTR_GLOBAL_STATUS_CLR",
  +	[0xc0000400 - x86_64_specific_MSRs_offset] = "IA32_EVT_CFG_BASE",
   };
   #define x86_AMD_V_KVM_MSRs_offset 0xc0010000
  $

Now one can trace systemwide asking to see backtraces to where that MSR is being read/written, see this example with a previous update:

  # perf trace -e msr:*_msr/max-stack=32/ --filter="msr>=IA32_U_CET && msr<=IA32_INT_SSP_TAB"
  ^C#

If we use -v (verbose mode) we can see what it does behind the scenes:

  # perf trace -v -e msr:*_msr/max-stack=32/ --filter="msr>=IA32_U_CET && msr<=IA32_INT_SSP_TAB"
  Using CPUID AuthenticAMD-25-21-0
  0x6a0
  0x6a8
  New filter for msr:read_msr: (msr>=0x6a0 && msr<=0x6a8) && (common_pid != 597499 && common_pid != 3313)
  0x6a0
  0x6a8
  New filter for msr:write_msr: (msr>=0x6a0 && msr<=0x6a8) && (common_pid != 597499 && common_pid != 3313)
  mmap size 528384B
  ^C#

Example with a frequent msr:

  # perf trace -v -e msr:*_msr/max-stack=32/ --filter="msr==IA32_SPEC_CTRL" --max-events 2
  Using CPUID AuthenticAMD-25-21-0
  0x48
  New filter for msr:read_msr: (msr==0x48) && (common_pid != 2612129 && common_pid != 3841)
  0x48
  New filter for msr:write_msr: (msr==0x48) && (common_pid != 2612129 && common_pid != 3841)
  mmap size 528384B
  Looking at the vmlinux_path (8 entries long)
  symsrc__init: build id mismatch for vmlinux.
  Using /proc/kcore for kernel data
  Using /proc/kallsyms for symbols
     0.000 Timer/2525383 msr:write_msr(msr: IA32_SPEC_CTRL, val: 6)
                do_trace_write_msr ([kernel.kallsyms])
                do_trace_write_msr ([kernel.kallsyms])
                __switch_to_xtra ([kernel.kallsyms])
                __switch_to ([kernel.kallsyms])
                __schedule ([kernel.kallsyms])
                schedule ([kernel.kallsyms])
                futex_wait_queue_me ([kernel.kallsyms])
                futex_wait ([kernel.kallsyms])
                do_futex ([kernel.kallsyms])
                __x64_sys_futex ([kernel.kallsyms])
                do_syscall_64 ([kernel.kallsyms])
                entry_SYSCALL_64_after_hwframe ([kernel.kallsyms])
                __futex_abstimed_wait_common64 (/usr/lib64/libpthread-2.33.so)
     0.030 :0/0 msr:write_msr(msr: IA32_SPEC_CTRL, val: 2)
                do_trace_write_msr ([kernel.kallsyms])
                do_trace_write_msr ([kernel.kallsyms])
                __switch_to_xtra ([kernel.kallsyms])
                __switch_to ([kernel.kallsyms])
                __schedule ([kernel.kallsyms])
                schedule_idle ([kernel.kallsyms])
                do_idle ([kernel.kallsyms])
                cpu_startup_entry ([kernel.kallsyms])
                secondary_startup_64_no_verify ([kernel.kallsyms])
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Babu Moger <babu.moger@amd.com>
Cc: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Breno Leitao <leitao@debian.org>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Kim Phillips <kim.phillips@amd.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Nikunj A Dadhania <nikunj@amd.com>
Link: https://lore.kernel.org/lkml/ZAJoaZ41+rU5H0vL@kernel.org
[ I had published the perf-tools branch before with the sync with ]
[ 8c29f016 ("x86/sev: Add SEV-SNP guest feature negotiation support") ]
[ I removed it from this new sync ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
To pick up the changes in:

  89b0e7de ("KVM: arm64: nv: Introduce nested virtualization VCPU feature")
  14329b82 ("KVM: x86/pmu: Introduce masked events to the pmu event filter")
  6213b701 ("KVM: x86: Replace 0-length arrays with flexible arrays")
  3fd49805 ("KVM: s390: Extend MEM_OP ioctl by storage key checked cmpxchg")
  14329b82 ("KVM: x86/pmu: Introduce masked events to the pmu event filter")

These don't change functionality in tools/perf, as no new ioctl is added for the 'perf trace' scripts to harvest.

This addresses these perf build warnings:

  Warning: Kernel ABI header at 'tools/include/uapi/linux/kvm.h' differs from latest version at 'include/uapi/linux/kvm.h'
  diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h
  Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/kvm.h' differs from latest version at 'arch/x86/include/uapi/asm/kvm.h'
  diff -u tools/arch/x86/include/uapi/asm/kvm.h arch/x86/include/uapi/asm/kvm.h
  Warning: Kernel ABI header at 'tools/arch/arm64/include/uapi/asm/kvm.h' differs from latest version at 'arch/arm64/include/uapi/asm/kvm.h'
  diff -u tools/arch/arm64/include/uapi/asm/kvm.h arch/arm64/include/uapi/asm/kvm.h

Cc: Aaron Lewis <aaronlewis@google.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/lkml/ZAJlg7%2FfWDVGX0F3@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
To pick up the changes in:

  6fd73538 ("mm/memfd: add F_SEAL_EXEC")

That doesn't add or change any perf tools functionality, only addresses these build warnings:

  Warning: Kernel ABI header at 'tools/include/uapi/linux/fcntl.h' differs from latest version at 'include/uapi/linux/fcntl.h'
  diff -u tools/include/uapi/linux/fcntl.h include/uapi/linux/fcntl.h

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Arnaldo Carvalho de Melo
To pick up the changes in this cset:

  cbdb1f16 ("vdso/bits.h: Add BIT_ULL() for the sake of consistency")

That just causes perf to rebuild; the macro included doesn't clash with anything in tools/{perf,objtool,bpf}.

This addresses these perf build warnings:

  Warning: Kernel ABI header at 'tools/include/linux/bits.h' differs from latest version at 'include/linux/bits.h'
  diff -u tools/include/linux/bits.h include/linux/bits.h
  Warning: Kernel ABI header at 'tools/include/vdso/bits.h' differs from latest version at 'include/vdso/bits.h'
  diff -u tools/include/vdso/bits.h include/vdso/bits.h

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
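For reference, the macro named in the cset title is a one-liner; a hedged sketch of its shape, per the cset description:

  /* from vdso/bits.h (shape as described; hedged) */
  #define BIT_ULL(nr)	(ULL(1) << (nr))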
-
Submitted by Arnaldo Carvalho de Melo
To pick up the new prctl options introduced in:

  b507808e ("mm: implement memory-deny-write-execute as a prctl")

That results in:

  $ diff -u tools/include/uapi/linux/prctl.h include/uapi/linux/prctl.h
  --- tools/include/uapi/linux/prctl.h	2022-06-20 17:54:43.884515663 -0300
  +++ include/uapi/linux/prctl.h	2023-03-03 11:18:51.090923569 -0300
  @@ -281,6 +281,12 @@
   # define PR_SME_VL_LEN_MASK		0xffff
   # define PR_SME_VL_INHERIT		(1 << 17) /* inherit across exec */

  +/* Memory deny write / execute */
  +#define PR_SET_MDWE			65
  +# define PR_MDWE_REFUSE_EXEC_GAIN	1
  +
  +#define PR_GET_MDWE			66
  +
   #define PR_SET_VMA			0x53564d41
   # define PR_SET_VMA_ANON_NAME		0
  $

  $ tools/perf/trace/beauty/prctl_option.sh > before
  $ cp include/uapi/linux/prctl.h tools/include/uapi/linux/prctl.h
  $ tools/perf/trace/beauty/prctl_option.sh > after
  $ diff -u before after
  --- before	2023-03-03 11:47:43.320013146 -0300
  +++ after	2023-03-03 11:47:50.937216229 -0300
  @@ -59,6 +59,8 @@
  	[62] = "SCHED_CORE",
  	[63] = "SME_SET_VL",
  	[64] = "SME_GET_VL",
  +	[65] = "SET_MDWE",
  +	[66] = "GET_MDWE",
   };
   static const char *prctl_set_mm_options[] = {
  	[1] = "START_CODE",
  $

Now users can do:

  # perf trace -e syscalls:sys_enter_prctl --filter "option==SET_MDWE||option==GET_MDWE"
  ^C#

  # trace -v -e syscalls:sys_enter_prctl --filter "option==SET_MDWE||option==GET_MDWE"
  New filter for syscalls:sys_enter_prctl: (option==65||option==66) && (common_pid != 5519 && common_pid != 3404)
  ^C#

And when these prctl options appear in a session, they will be translated to the corresponding string.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/lkml/ZAI%2FAoPXb%2Fsxz1%2Fm@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
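A minimal usage sketch, assuming only the values visible in the diff above (the fallback defines mirror those values in case the libc headers predate them):

  #include <stdio.h>
  #include <sys/prctl.h>

  #ifndef PR_SET_MDWE
  #define PR_SET_MDWE			65
  #define PR_MDWE_REFUSE_EXEC_GAIN	1
  #define PR_GET_MDWE			66
  #endif

  int main(void)
  {
  	/* deny this process any future writable->executable gains */
  	if (prctl(PR_SET_MDWE, PR_MDWE_REFUSE_EXEC_GAIN, 0, 0, 0))
  		perror("PR_SET_MDWE");
  	printf("MDWE flags: %d\n", (int)prctl(PR_GET_MDWE, 0, 0, 0, 0));
  	return 0;
  }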
-
Submitted by Arnaldo Carvalho de Melo
We also continue with SYM_TYPED_FUNC_START() in util/include/linux/linkage.h and with an exception in tools/perf/check_headers.sh's diff check, to ignore the "include cfi_types.h" line when checking whether the kernel original files drifted from the copies we carry.

This is to get the changes from:

  69d4c0d3 ("entry, kasan, x86: Disallow overriding mem*() functions")

That addresses these perf tools build warnings:

  Warning: Kernel ABI header at 'tools/arch/x86/lib/memcpy_64.S' differs from latest version at 'arch/x86/lib/memcpy_64.S'
  diff -u tools/arch/x86/lib/memcpy_64.S arch/x86/lib/memcpy_64.S
  Warning: Kernel ABI header at 'tools/arch/x86/lib/memset_64.S' differs from latest version at 'arch/x86/lib/memset_64.S'
  diff -u tools/arch/x86/lib/memset_64.S arch/x86/lib/memset_64.S

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/lkml/ZAH%2FjsioJXGIOrkf@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 03 March 2023, 4 commits
-
-
Submitted by Jakub Kicinski
Pretty much all families use value: 1, or reserve the first entry in the attribute set and the first operation as unspec. Make this the default. Update the documentation (the doc for values of operations just refers back to the doc for attrs, so only the attrs doc is updated).

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
To avoid having to repeat the entire definition of an attribute (including the value), use the Attr object from the original set. In fact this is already the documented expectation.

Fixes: be5bea1c ("net: add basic C code generators for Netlink")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Changbin Du
When creating counters with an initial delay configured, the enable_on_exec field is not set, so we need to enable the counters later. The problem is that when a workload is specified, target__none() is true, so we also need to check stat_config.initial_delay.

In this change, we add a new field 'initial_delay' to struct target, which can be shared by other subcommands, and define target__enable_on_exec(), which returns whether enable_on_exec should be set in the normal cases.

Before this fix the event is not counted:

  $ ./perf stat -e instructions -D 100 sleep 2
  Events disabled
  Events enabled

   Performance counter stats for 'sleep 2':

       <not counted>      instructions

         1.901661124 seconds time elapsed

         0.001602000 seconds user
         0.000000000 seconds sys

After the fix it works:

  $ ./perf stat -e instructions -D 100 sleep 2
  Events disabled
  Events enabled

   Performance counter stats for 'sleep 2':

             404,214      instructions

         1.901743475 seconds time elapsed

         0.001617000 seconds user
         0.000000000 seconds sys

Fixes: c587e77e ("perf stat: Do not delay the workload with --delay")
Signed-off-by: Changbin Du <changbin.du@huawei.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Hui Wang <hw.huiwang@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20230302031146.2801588-2-changbin.du@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
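A hedged sketch of the helper's logic as described above (the authoritative body lives in tools/perf/util/target.h):

  /* enable_on_exec only makes sense when perf forks the workload
   * itself (target__none()) and no initial delay was requested */
  static inline bool target__enable_on_exec(struct target *target)
  {
  	return !target->initial_delay && target__none(target);
  }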
-
Submitted by Arnaldo Carvalho de Melo
To pick up the changes in:

  8c29f016 ("x86/sev: Add SEV-SNP guest feature negotiation support")

That triggers:

  CC      /tmp/build/perf-tools/arch/x86/util/kvm-stat.o
  CC      /tmp/build/perf-tools/util/header.o
  LD      /tmp/build/perf-tools/arch/x86/util/perf-in.o
  LD      /tmp/build/perf-tools/arch/x86/perf-in.o
  LD      /tmp/build/perf-tools/arch/perf-in.o
  LD      /tmp/build/perf-tools/util/perf-in.o
  LD      /tmp/build/perf-tools/perf-in.o
  LINK    /tmp/build/perf-tools/perf

But this time it causes no changes in tooling results, as the introduced SVM_VMGEXIT_TERM_REQUEST exit reason wasn't added to SVM_EXIT_REASONS, which is used in kvm-stat.c.

And it addresses this perf build warning:

  Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/svm.h' differs from latest version at 'arch/x86/include/uapi/asm/svm.h'
  diff -u tools/arch/x86/include/uapi/asm/svm.h arch/x86/include/uapi/asm/svm.h

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Nikunj A Dadhania <nikunj@amd.com>
Link: http://lore.kernel.org/lkml/
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 02 March 2023, 2 commits
-
-
Submitted by Linus Torvalds
Back in 2008 we extended the capability bits from 32 to 64, and we did it by extending the single 32-bit capability word from one word to an array of two words. It was then obfuscated by hiding the "2" behind two macro expansions, with the reasoning being that maybe it gets extended further some day.

That reasoning may have been valid at the time, but the last thing we want to do is to extend the capability set any more. And the array of values not only causes source code oddities (with loops to deal with it), but also results in worse code generation. It's a lose-lose situation.

So just change the 'u32[2]' into a 'u64' and be done with it.

We still have to deal with the fact that the user space interface is designed around an array of these 32-bit values, but that was the case before too, since the array layouts were different (ie user space doesn't use an array of 32-bit values for individual capability masks, but an array of 32-bit slices of multiple masks).

So that marshalling of data is actually simplified too, even if it does remain somewhat obscure and odd.

This was all triggered by my reaction to the new "cap_isidentical()" introduced recently. By just using a saner data structure, it went from

  unsigned __capi;
  CAP_FOR_EACH_U32(__capi) {
  	if (a.cap[__capi] != b.cap[__capi])
  		return false;
  }
  return true;

to just being

  return a.val == b.val;

instead. Which is rather more obvious both to humans and to compilers.

Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Casey Schaufler <casey@schaufler-ca.com>
Cc: Serge Hallyn <serge@hallyn.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Paul Moore <paul@paul-moore.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
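The boundary marshalling the commit alludes to boils down to splitting and joining 32-bit slices; a hedged sketch (helper names are illustrative, not the kernel's):

  #include <linux/types.h>	/* __u32/__u64; illustrative context */

  static inline __u64 cap_combine(__u32 lo, __u32 hi)
  {
  	return ((__u64)hi << 32) | lo;
  }

  static inline void cap_split(__u64 val, __u32 *lo, __u32 *hi)
  {
  	*lo = (__u32)val;
  	*hi = (__u32)(val >> 32);
  }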
-
Submitted by Hangbin Liu
The test_local_dnat_portonly() function initiates the client side as soon as it sets the listening side to the background. This could lead to a race condition where the server may not be ready to listen. To ensure that the server side is up and running before initiating the client side, a delay is introduced to test_local_dnat_portonly().

Before the fix:

  # ./nft_nat.sh
  PASS: netns routing/connectivity: ns0-rthlYrBU can reach ns1-rthlYrBU and ns2-rthlYrBU
  PASS: ping to ns1-rthlYrBU was ip NATted to ns2-rthlYrBU
  PASS: ping to ns1-rthlYrBU OK after ip nat output chain flush
  PASS: ipv6 ping to ns1-rthlYrBU was ip6 NATted to ns2-rthlYrBU
  2023/02/27 04:11:03 socat[6055] E connect(5, AF=2 10.0.1.99:2000, 16): Connection refused
  ERROR: inet port rewrite

After the fix:

  # ./nft_nat.sh
  PASS: netns routing/connectivity: ns0-9sPJV6JJ can reach ns1-9sPJV6JJ and ns2-9sPJV6JJ
  PASS: ping to ns1-9sPJV6JJ was ip NATted to ns2-9sPJV6JJ
  PASS: ping to ns1-9sPJV6JJ OK after ip nat output chain flush
  PASS: ipv6 ping to ns1-9sPJV6JJ was ip6 NATted to ns2-9sPJV6JJ
  PASS: inet port rewrite without l3 address

Fixes: 282e5f8f ("netfilter: nat: really support inet nat without l3 address")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
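The ordering fix amounts to this pattern (a minimal shell sketch; the real test uses its own netns helpers and addresses):

  socat -u TCP-LISTEN:2000 STDOUT > "$outfile" &
  sleep 1	# give the backgrounded listener time to reach listen()
  echo ping | socat -u STDIN TCP:10.0.1.99:2000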
-
- 25 February 2023, 6 commits
-
-
Submitted by Qing Zhang
Before:

  [5] Kprobe event string type argument	[UNTESTED]
  [7] Kprobe event argument syntax	[UNTESTED]

After:

  [5] Kprobe event string type argument	[PASS]
  [7] Kprobe event argument syntax	[PASS]

Signed-off-by: Qing Zhang <zhangqing@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
Submitted by Huacai Chen
BPF for LoongArch is supported now; add the selftest support in seccomp_bpf.c.

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
Submitted by Huacai Chen
We will add tools support for LoongArch (bpf, perf, objtool, etc.); add the build infrastructure and common headers in preparation.

Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-
Submitted by Jakub Kicinski
Python will generate its customary cache when running the ynl scripts:

  ?? tools/net/ynl/lib/__pycache__/

Reported-by: Chuck Lever III <chuck.lever@oracle.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jakub Kicinski
traceback.print_exception() seems tricky to call, we're missing some argument, so re-raise instead.

Reported-by: Chuck Lever III <chuck.lever@oracle.com>
Fixes: 3aacf828 ("tools: ynl: add an object hierarchy to represent parsed spec")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
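The gist of the change, as a hedged Python sketch (illustrative names, not the ynl source):

  def parse_spec(path):               # hypothetical parsing step
      raise ValueError(f"bad spec: {path}")

  try:
      parse_spec("netdev.yaml")
  except Exception:
      # traceback.print_exception() wants version-dependent arguments;
      # re-raising keeps the full traceback with no extra plumbing.
      raise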
-
Submitted by Jakub Kicinski
Chuck ran into an issue with a single-element attr-set which only has an attr with a value of 0. The search for the max attr in a struct records attrs with a value larger than 0 only (max_val is set to 0 at the start). Adjust the comparison; alternatively, max_val could be init'ed to -1. Somehow picking the last attr of a given value seems like a good idea in general.

Reported-by: Chuck Lever III <chuck.lever@oracle.com>
Fixes: be5bea1c ("net: add basic C code generators for Netlink")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
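The comparison fix can be illustrated in a few lines (hedged; the Attr objects are hypothetical stand-ins for the generator's):

  from collections import namedtuple

  Attr = namedtuple('Attr', ['name', 'value'])
  attrs = [Attr('pad', 0)]          # a lone attr with value 0

  max_val, max_attr = 0, None
  for attr in attrs:
      # '>=' records the value-0 attr, which '>' never would, and also
      # prefers the last attr of any given value
      if attr.value >= max_val:
          max_val, max_attr = attr.value, attr
  assert max_attr is not None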
-
- 24 February 2023, 1 commit
-
-
Submitted by Tariq Toukan
Fix a repeated copy/paste typo.

Fixes: d3d854fd ("netdev-genl: create a simple family for netdev stuff")
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Acked-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 February 2023, 13 commits
-
-
Submitted by Benjamin Tissoires
Now that CONFIG_HID_BPF is not automatically implied by HID, we need to set it properly in the selftests config.

Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
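The fix is presumably a one-line config fragment of this shape (hedged):

  CONFIG_HID_BPF=y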
-
Submitted by Ian Rogers
Commas may appear in events like:

  cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/

which causes the count of commas to see more items than expected. Switch to counting the entries in the dictionary, which is 1 more than the number of commas.

Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Claire Jensen <cjense@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sumanth Korikkar <sumanthk@linux.ibm.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Link: https://lore.kernel.org/r/20230223071818.329671-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
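A hedged Python illustration (not the test source) of why parsing beats comma counting:

  import json

  line = '{"counter-value": "404214", "event": "cpu/x,cmask=1,edge/"}'
  fields = json.loads(line)
  assert len(fields) == 2              # robust entry count
  assert line.count(',') + 1 == 4      # fooled by commas in the event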
-
Submitted by Ian Rogers
Commas may appear in events like:

  cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/

which causes the commachecker to see more fields than expected. Use @ as the CSV separator to avoid this.

Signed-off-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Cc: Claire Jensen <cjense@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sumanth Korikkar <sumanthk@linux.ibm.com>
Cc: Thomas Richter <tmricht@linux.ibm.com>
Link: https://lore.kernel.org/r/20230223071818.329671-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
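In perf stat terms the switch looks like this (illustrative invocation; -x sets the field separator, so '@' cannot collide with commas inside the event name):

  perf stat -x@ -e 'cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/' true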
-
Submitted by Namhyung Kim
When MMAP2 has the PERF_RECORD_MISC_MMAP_BUILD_ID flag, it means the record already has the build-id info. So it marks the DSO as hit, to skip it if the same DSO happens to miss the build-id later. But it failed to copy the MMAP2 record itself, so it'd fail to symbolize samples for those regions.

For example, the following generates 249 MMAP2 events:

  $ perf record --buildid-mmap -o- true | perf report --stat -i- | grep MMAP2
            MMAP2 events:        249  (86.8%)

Adding perf inject should not change the number of events like this:

  $ perf record --buildid-mmap -o- true | perf inject -b | \
  > perf report --stat -i- | grep MMAP2
            MMAP2 events:        249  (86.5%)

But when --buildid-all is used, it eats most of the MMAP2 events:

  $ perf record --buildid-mmap -o- true | perf inject -b --buildid-all | \
  > perf report --stat -i- | grep MMAP2
            MMAP2 events:          1  ( 2.5%)

With this patch, it shows the original number now:

  $ perf record --buildid-mmap -o- true | perf inject -b --buildid-all | \
  > perf report --stat -i- | grep MMAP2
            MMAP2 events:        249  (86.5%)

Committer testing:

Before:

  $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf inject -b | perf report --stat -i- | grep MMAP2
            MMAP2 events:         58  (36.2%)
  $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf report --stat -i- | grep MMAP2
            MMAP2 events:         58  (36.2%)
  $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf inject -b --buildid-all | perf report --stat -i- | grep MMAP2
            MMAP2 events:          2  ( 1.9%)
  $

After:

  $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf inject -b | perf report --stat -i- | grep MMAP2
            MMAP2 events:         58  (29.3%)
  $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf report --stat -i- | grep MMAP2
            MMAP2 events:         58  (34.3%)
  $ perf record --buildid-mmap -o- perf stat --null sleep 1 2> /dev/null | perf inject -b --buildid-all | perf report --stat -i- | grep MMAP2
            MMAP2 events:         58  (38.4%)
  $

Fixes: f7fc0d1c ("perf inject: Do not inject BUILD_ID record if MMAP2 has it")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230223070155.54251-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Lu Wei
Add tests to check whether the total fib info length is calculated correctly in the route notify process.

Signed-off-by: Lu Wei <luwei32@huawei.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20230222083629.335683-3-luwei32@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Submitted by Josh Poimboeuf
There have been some recently reported ORC unwinder warnings like:

  WARNING: can't access registers at entry_SYSCALL_64_after_hwframe+0x63/0xcd
  WARNING: stack going in the wrong direction? at __sys_setsockopt+0x2c6/0x5b0 net/socket.c:2271

And a KASAN warning:

  BUG: KASAN: stack-out-of-bounds in unwind_next_frame (arch/x86/include/asm/ptrace.h:136 arch/x86/kernel/unwind_orc.c:455)

It turns out the 'signal' bit isn't getting propagated from the unwind hints to the ORC entries, making the unwinder confused at times.

Fixes: ffb1b4a4 ("x86/unwind/orc: Add 'signal' field to ORC metadata")
Reported-by: kernel test robot <oliver.sang@intel.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/97eef9db60cd86d376a9a40d49d77bb67a8f6526.1676579666.git.jpoimboe@kernel.org
-
Submitted by Peter Zijlstra
Replace the instruction::list by allocating instructions in arrays of 256 entries and stringing them together by (amortized) find_insn(). This shrinks instruction by 16 bytes and brings it down to 128.

  struct instruction {
  -	struct list_head list;			/*   0  16 */
  -	struct hlist_node hash;			/*  16  16 */
  -	struct list_head call_node;		/*  32  16 */
  -	struct section * sec;			/*  48   8 */
  -	long unsigned int offset;		/*  56   8 */
  -	/* --- cacheline 1 boundary (64 bytes) --- */
  -	long unsigned int immediate;		/*  64   8 */
  -	unsigned int len;			/*  72   4 */
  -	u8 type;				/*  76   1 */
  -
  -	/* Bitfield combined with previous fields */
  +	struct hlist_node hash;			/*   0  16 */
  +	struct list_head call_node;		/*  16  16 */
  +	struct section * sec;			/*  32   8 */
  +	long unsigned int offset;		/*  40   8 */
  +	long unsigned int immediate;		/*  48   8 */
  +	u8 len;					/*  56   1 */
  +	u8 prev_len;				/*  57   1 */
  +	u8 type;				/*  58   1 */
  +	s8 instr;				/*  59   1 */
  +	u32 idx:8;				/*  60: 0  4 */
  +	u32 dead_end:1;				/*  60: 8  4 */
  +	u32 ignore:1;				/*  60: 9  4 */
  +	u32 ignore_alts:1;			/*  60:10  4 */
  +	u32 hint:1;				/*  60:11  4 */
  +	u32 save:1;				/*  60:12  4 */
  +	u32 restore:1;				/*  60:13  4 */
  +	u32 retpoline_safe:1;			/*  60:14  4 */
  +	u32 noendbr:1;				/*  60:15  4 */
  +	u32 entry:1;				/*  60:16  4 */
  +	u32 visited:4;				/*  60:17  4 */
  +	u32 no_reloc:1;				/*  60:21  4 */
  -	u16 dead_end:1;				/*  76: 8  2 */
  -	u16 ignore:1;				/*  76: 9  2 */
  -	u16 ignore_alts:1;			/*  76:10  2 */
  -	u16 hint:1;				/*  76:11  2 */
  -	u16 save:1;				/*  76:12  2 */
  -	u16 restore:1;				/*  76:13  2 */
  -	u16 retpoline_safe:1;			/*  76:14  2 */
  -	u16 noendbr:1;				/*  76:15  2 */
  -	u16 entry:1;				/*  78: 0  2 */
  -	u16 visited:4;				/*  78: 1  2 */
  -	u16 no_reloc:1;				/*  78: 5  2 */
  +	/* XXX 10 bits hole, try to pack */
  -	/* XXX 2 bits hole, try to pack */
  -	/* Bitfield combined with next fields */
  -
  -	s8 instr;				/*  79   1 */
  -	struct alt_group * alt_group;		/*  80   8 */
  -	struct instruction * jump_dest;		/*  88   8 */
  -	struct instruction * first_jump_src;	/*  96   8 */
  +	/* --- cacheline 1 boundary (64 bytes) --- */
  +	struct alt_group * alt_group;		/*  64   8 */
  +	struct instruction * jump_dest;		/*  72   8 */
  +	struct instruction * first_jump_src;	/*  80   8 */
  	union {
  -		struct symbol * _call_dest;	/* 104   8 */
  -		struct reloc * _jump_table;	/* 104   8 */
  -	};					/* 104   8 */
  -	struct alternative * alts;		/* 112   8 */
  -	struct symbol * sym;			/* 120   8 */
  -	/* --- cacheline 2 boundary (128 bytes) --- */
  -	struct stack_op * stack_ops;		/* 128   8 */
  -	struct cfi_state * cfi;			/* 136   8 */
  +		struct symbol * _call_dest;	/*  88   8 */
  +		struct reloc * _jump_table;	/*  88   8 */
  +	};					/*  88   8 */
  +	struct alternative * alts;		/*  96   8 */
  +	struct symbol * sym;			/* 104   8 */
  +	struct stack_op * stack_ops;		/* 112   8 */
  +	struct cfi_state * cfi;			/* 120   8 */
  -	/* size: 144, cachelines: 3, members: 28 */
  -	/* sum members: 142 */
  -	/* sum bitfield members: 14 bits, bit holes: 1, sum bit holes: 2 bits */
  -	/* last cacheline: 16 bytes */
  +	/* size: 128, cachelines: 2, members: 29 */
  +	/* sum members: 124 */
  +	/* sum bitfield members: 22 bits, bit holes: 1, sum bit holes: 10 bits */
  };

  pre:  5:38.18 real, 213.25 user, 124.90 sys, 23449040 mem
  post: 5:03.34 real, 210.75 user,  88.80 sys, 20241232 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.851307606@infradead.org
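The allocation scheme the commit describes can be sketched as follows (a hedged, illustrative sketch, not the objtool source; assumes <stdlib.h> and objtool's struct instruction):

  #define INSN_CHUNK_SIZE 256

  static struct instruction *alloc_insn(void)
  {
  	static struct instruction *chunk;
  	static size_t idx = INSN_CHUNK_SIZE;

  	/* carve instructions out of 256-entry arrays instead of
  	 * paying 16 bytes per instruction for a list_head */
  	if (idx == INSN_CHUNK_SIZE) {
  		chunk = calloc(INSN_CHUNK_SIZE, sizeof(*chunk));
  		if (!chunk)
  			return NULL;
  		idx = 0;
  	}
  	return &chunk[idx++];
  }

Neighbouring instructions are then located via the (amortized) find_insn() hash lookup on section/offset rather than by following list pointers.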
-
Submitted by Peter Zijlstra
Things like ALTERNATIVE_{2,3}() generate multiple alternatives at the same place; objtool would override the first orig_alt_group with the second (or third), failing to check the CFI among all the different variants.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.711471461@infradead.org
-
Submitted by Peter Zijlstra
The instruction call_dest and jump_table members can never be used at the same time; their usage depends on type.

  struct instruction {
  	struct list_head list;			/*   0  16 */
  	struct hlist_node hash;			/*  16  16 */
  	struct list_head call_node;		/*  32  16 */
  	struct section * sec;			/*  48   8 */
  	long unsigned int offset;		/*  56   8 */
  	/* --- cacheline 1 boundary (64 bytes) --- */
  	long unsigned int immediate;		/*  64   8 */
  	unsigned int len;			/*  72   4 */
  	u8 type;				/*  76   1 */
  	/* Bitfield combined with previous fields */
  	u16 dead_end:1;				/*  76: 8  2 */
  	u16 ignore:1;				/*  76: 9  2 */
  	u16 ignore_alts:1;			/*  76:10  2 */
  	u16 hint:1;				/*  76:11  2 */
  	u16 save:1;				/*  76:12  2 */
  	u16 restore:1;				/*  76:13  2 */
  	u16 retpoline_safe:1;			/*  76:14  2 */
  	u16 noendbr:1;				/*  76:15  2 */
  	u16 entry:1;				/*  78: 0  2 */
  	u16 visited:4;				/*  78: 1  2 */
  	u16 no_reloc:1;				/*  78: 5  2 */
  	/* XXX 2 bits hole, try to pack */
  	/* Bitfield combined with next fields */
  	s8 instr;				/*  79   1 */
  	struct alt_group * alt_group;		/*  80   8 */
  -	struct symbol * call_dest;		/*  88   8 */
  -	struct instruction * jump_dest;		/*  96   8 */
  -	struct instruction * first_jump_src;	/* 104   8 */
  -	struct reloc * jump_table;		/* 112   8 */
  -	struct alternative * alts;		/* 120   8 */
  +	struct instruction * jump_dest;		/*  88   8 */
  +	struct instruction * first_jump_src;	/*  96   8 */
  +	union {
  +		struct symbol * _call_dest;	/* 104   8 */
  +		struct reloc * _jump_table;	/* 104   8 */
  +	};					/* 104   8 */
  +	struct alternative * alts;		/* 112   8 */
  +	struct symbol * sym;			/* 120   8 */
  	/* --- cacheline 2 boundary (128 bytes) --- */
  -	struct symbol * sym;			/* 128   8 */
  -	struct stack_op * stack_ops;		/* 136   8 */
  -	struct cfi_state * cfi;			/* 144   8 */
  +	struct stack_op * stack_ops;		/* 128   8 */
  +	struct cfi_state * cfi;			/* 136   8 */
  -	/* size: 152, cachelines: 3, members: 29 */
  -	/* sum members: 150 */
  +	/* size: 144, cachelines: 3, members: 28 */
  +	/* sum members: 142 */
  	/* sum bitfield members: 14 bits, bit holes: 1, sum bit holes: 2 bits */
  -	/* last cacheline: 24 bytes */
  +	/* last cacheline: 16 bytes */
  };

  pre:  5:39.35 real, 215.58 user, 123.69 sys, 23448736 mem
  post: 5:38.18 real, 213.25 user, 124.90 sys, 23449040 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.640914454@infradead.org
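A type-guarded accessor for such a union typically looks like this (a hedged sketch; objtool's real helpers and type names may differ):

  static inline struct symbol *insn_call_dest(struct instruction *insn)
  {
  	/* dynamic jumps/calls use the _jump_table member instead */
  	if (insn->type == INSN_JUMP_DYNAMIC ||
  	    insn->type == INSN_CALL_DYNAMIC)
  		return NULL;

  	return insn->_call_dest;
  }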
-
Submitted by Peter Zijlstra
Instead of caching the reloc for each instruction, only keep a negative cache of not having a reloc (by far the most common case).

  struct instruction {
  	struct list_head list;			/*   0  16 */
  	struct hlist_node hash;			/*  16  16 */
  	struct list_head call_node;		/*  32  16 */
  	struct section * sec;			/*  48   8 */
  	long unsigned int offset;		/*  56   8 */
  	/* --- cacheline 1 boundary (64 bytes) --- */
  	long unsigned int immediate;		/*  64   8 */
  	unsigned int len;			/*  72   4 */
  	u8 type;				/*  76   1 */
  	/* Bitfield combined with previous fields */
  	u16 dead_end:1;				/*  76: 8  2 */
  	u16 ignore:1;				/*  76: 9  2 */
  	u16 ignore_alts:1;			/*  76:10  2 */
  	u16 hint:1;				/*  76:11  2 */
  	u16 save:1;				/*  76:12  2 */
  	u16 restore:1;				/*  76:13  2 */
  	u16 retpoline_safe:1;			/*  76:14  2 */
  	u16 noendbr:1;				/*  76:15  2 */
  	u16 entry:1;				/*  78: 0  2 */
  	u16 visited:4;				/*  78: 1  2 */
  +	u16 no_reloc:1;				/*  78: 5  2 */
  -	/* XXX 3 bits hole, try to pack */
  +	/* XXX 2 bits hole, try to pack */
  	/* Bitfield combined with next fields */
  	s8 instr;				/*  79   1 */
  	struct alt_group * alt_group;		/*  80   8 */
  	struct symbol * call_dest;		/*  88   8 */
  	struct instruction * jump_dest;		/*  96   8 */
  	struct instruction * first_jump_src;	/* 104   8 */
  	struct reloc * jump_table;		/* 112   8 */
  -	struct reloc * reloc;			/* 120   8 */
  +	struct alternative * alts;		/* 120   8 */
  	/* --- cacheline 2 boundary (128 bytes) --- */
  -	struct alternative * alts;		/* 128   8 */
  -	struct symbol * sym;			/* 136   8 */
  -	struct stack_op * stack_ops;		/* 144   8 */
  -	struct cfi_state * cfi;			/* 152   8 */
  +	struct symbol * sym;			/* 128   8 */
  +	struct stack_op * stack_ops;		/* 136   8 */
  +	struct cfi_state * cfi;			/* 144   8 */
  -	/* size: 160, cachelines: 3, members: 29 */
  -	/* sum members: 158 */
  -	/* sum bitfield members: 13 bits, bit holes: 1, sum bit holes: 3 bits */
  -	/* last cacheline: 32 bytes */
  +	/* size: 152, cachelines: 3, members: 29 */
  +	/* sum members: 150 */
  +	/* sum bitfield members: 14 bits, bit holes: 1, sum bit holes: 2 bits */
  +	/* last cacheline: 24 bytes */
  };

  pre:  5:48.89 real, 220.96 user, 127.55 sys, 24834672 mem
  post: 5:39.35 real, 215.58 user, 123.69 sys, 23448736 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.572145269@infradead.org
-
Submitted by Peter Zijlstra
Since we don't have that many types in enum insn_type, force it into a u8 and re-arrange members to get rid of the holes; saves another 8 bytes.

  struct instruction {
  	struct list_head list;			/*   0  16 */
  	struct hlist_node hash;			/*  16  16 */
  	struct list_head call_node;		/*  32  16 */
  	struct section * sec;			/*  48   8 */
  	long unsigned int offset;		/*  56   8 */
  	/* --- cacheline 1 boundary (64 bytes) --- */
  -	unsigned int len;			/*  64   4 */
  -	enum insn_type type;			/*  68   4 */
  -	long unsigned int immediate;		/*  72   8 */
  -	u16 dead_end:1;				/*  80: 0  2 */
  -	u16 ignore:1;				/*  80: 1  2 */
  -	u16 ignore_alts:1;			/*  80: 2  2 */
  -	u16 hint:1;				/*  80: 3  2 */
  -	u16 save:1;				/*  80: 4  2 */
  -	u16 restore:1;				/*  80: 5  2 */
  -	u16 retpoline_safe:1;			/*  80: 6  2 */
  -	u16 noendbr:1;				/*  80: 7  2 */
  -	u16 entry:1;				/*  80: 8  2 */
  +	long unsigned int immediate;		/*  64   8 */
  +	unsigned int len;			/*  72   4 */
  +	u8 type;				/*  76   1 */
  -	/* XXX 7 bits hole, try to pack */
  +	/* Bitfield combined with previous fields */
  -	s8 instr;				/*  82   1 */
  -	u8 visited;				/*  83   1 */
  +	u16 dead_end:1;				/*  76: 8  2 */
  +	u16 ignore:1;				/*  76: 9  2 */
  +	u16 ignore_alts:1;			/*  76:10  2 */
  +	u16 hint:1;				/*  76:11  2 */
  +	u16 save:1;				/*  76:12  2 */
  +	u16 restore:1;				/*  76:13  2 */
  +	u16 retpoline_safe:1;			/*  76:14  2 */
  +	u16 noendbr:1;				/*  76:15  2 */
  +	u16 entry:1;				/*  78: 0  2 */
  +	u16 visited:4;				/*  78: 1  2 */
  -	/* XXX 4 bytes hole, try to pack */
  +	/* XXX 3 bits hole, try to pack */
  +	/* Bitfield combined with next fields */
  -	struct alt_group * alt_group;		/*  88   8 */
  -	struct symbol * call_dest;		/*  96   8 */
  -	struct instruction * jump_dest;		/* 104   8 */
  -	struct instruction * first_jump_src;	/* 112   8 */
  -	struct reloc * jump_table;		/* 120   8 */
  +	s8 instr;				/*  79   1 */
  +	struct alt_group * alt_group;		/*  80   8 */
  +	struct symbol * call_dest;		/*  88   8 */
  +	struct instruction * jump_dest;		/*  96   8 */
  +	struct instruction * first_jump_src;	/* 104   8 */
  +	struct reloc * jump_table;		/* 112   8 */
  +	struct reloc * reloc;			/* 120   8 */
  	/* --- cacheline 2 boundary (128 bytes) --- */
  -	struct reloc * reloc;			/* 128   8 */
  -	struct alternative * alts;		/* 136   8 */
  -	struct symbol * sym;			/* 144   8 */
  -	struct stack_op * stack_ops;		/* 152   8 */
  -	struct cfi_state * cfi;			/* 160   8 */
  +	struct alternative * alts;		/* 128   8 */
  +	struct symbol * sym;			/* 136   8 */
  +	struct stack_op * stack_ops;		/* 144   8 */
  +	struct cfi_state * cfi;			/* 152   8 */
  -	/* size: 168, cachelines: 3, members: 29 */
  -	/* sum members: 162, holes: 1, sum holes: 4 */
  -	/* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 7 bits */
  -	/* last cacheline: 40 bytes */
  +	/* size: 160, cachelines: 3, members: 29 */
  +	/* sum members: 158 */
  +	/* sum bitfield members: 13 bits, bit holes: 1, sum bit holes: 3 bits */
  +	/* last cacheline: 32 bytes */
  };

  pre:  5:48.86 real, 220.30 user, 128.34 sys, 24834672 mem
  post: 5:48.89 real, 220.96 user, 127.55 sys, 24834672 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.501847188@infradead.org
-
Submitted by Peter Zijlstra
Replace the instruction::alts list_head with a single-pointer struct alternative chain:

  struct instruction {
  	struct list_head list;			/*   0  16 */
  	struct hlist_node hash;			/*  16  16 */
  	struct list_head call_node;		/*  32  16 */
  	struct section * sec;			/*  48   8 */
  	long unsigned int offset;		/*  56   8 */
  	/* --- cacheline 1 boundary (64 bytes) --- */
  	unsigned int len;			/*  64   4 */
  	enum insn_type type;			/*  68   4 */
  	long unsigned int immediate;		/*  72   8 */
  	u16 dead_end:1;				/*  80: 0  2 */
  	u16 ignore:1;				/*  80: 1  2 */
  	u16 ignore_alts:1;			/*  80: 2  2 */
  	u16 hint:1;				/*  80: 3  2 */
  	u16 save:1;				/*  80: 4  2 */
  	u16 restore:1;				/*  80: 5  2 */
  	u16 retpoline_safe:1;			/*  80: 6  2 */
  	u16 noendbr:1;				/*  80: 7  2 */
  	u16 entry:1;				/*  80: 8  2 */
  	/* XXX 7 bits hole, try to pack */
  	s8 instr;				/*  82   1 */
  	u8 visited;				/*  83   1 */
  	/* XXX 4 bytes hole, try to pack */
  	struct alt_group * alt_group;		/*  88   8 */
  	struct symbol * call_dest;		/*  96   8 */
  	struct instruction * jump_dest;		/* 104   8 */
  	struct instruction * first_jump_src;	/* 112   8 */
  	struct reloc * jump_table;		/* 120   8 */
  	/* --- cacheline 2 boundary (128 bytes) --- */
  	struct reloc * reloc;			/* 128   8 */
  -	struct list_head alts;			/* 136  16 */
  -	struct symbol * sym;			/* 152   8 */
  -	struct stack_op * stack_ops;		/* 160   8 */
  -	struct cfi_state * cfi;			/* 168   8 */
  +	struct alternative * alts;		/* 136   8 */
  +	struct symbol * sym;			/* 144   8 */
  +	struct stack_op * stack_ops;		/* 152   8 */
  +	struct cfi_state * cfi;			/* 160   8 */
  -	/* size: 176, cachelines: 3, members: 29 */
  -	/* sum members: 170, holes: 1, sum holes: 4 */
  +	/* size: 168, cachelines: 3, members: 29 */
  +	/* sum members: 162, holes: 1, sum holes: 4 */
  	/* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 7 bits */
  -	/* last cacheline: 48 bytes */
  +	/* last cacheline: 40 bytes */
  };

  pre:  5:58.50 real, 229.64 user, 128.65 sys, 26221520 mem
  post: 5:48.86 real, 220.30 user, 128.34 sys, 24834672 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.430556498@infradead.org
-
Submitted by Peter Zijlstra
Replace the instruction::stack_ops list_head with a single-pointer struct stack_op chain:

  struct instruction {
  	struct list_head list;			/*   0  16 */
  	struct hlist_node hash;			/*  16  16 */
  	struct list_head call_node;		/*  32  16 */
  	struct section * sec;			/*  48   8 */
  	long unsigned int offset;		/*  56   8 */
  	/* --- cacheline 1 boundary (64 bytes) --- */
  	unsigned int len;			/*  64   4 */
  	enum insn_type type;			/*  68   4 */
  	long unsigned int immediate;		/*  72   8 */
  	u16 dead_end:1;				/*  80: 0  2 */
  	u16 ignore:1;				/*  80: 1  2 */
  	u16 ignore_alts:1;			/*  80: 2  2 */
  	u16 hint:1;				/*  80: 3  2 */
  	u16 save:1;				/*  80: 4  2 */
  	u16 restore:1;				/*  80: 5  2 */
  	u16 retpoline_safe:1;			/*  80: 6  2 */
  	u16 noendbr:1;				/*  80: 7  2 */
  	u16 entry:1;				/*  80: 8  2 */
  	/* XXX 7 bits hole, try to pack */
  	s8 instr;				/*  82   1 */
  	u8 visited;				/*  83   1 */
  	/* XXX 4 bytes hole, try to pack */
  	struct alt_group * alt_group;		/*  88   8 */
  	struct symbol * call_dest;		/*  96   8 */
  	struct instruction * jump_dest;		/* 104   8 */
  	struct instruction * first_jump_src;	/* 112   8 */
  	struct reloc * jump_table;		/* 120   8 */
  	/* --- cacheline 2 boundary (128 bytes) --- */
  	struct reloc * reloc;			/* 128   8 */
  	struct list_head alts;			/* 136  16 */
  	struct symbol * sym;			/* 152   8 */
  -	struct list_head stack_ops;		/* 160  16 */
  -	struct cfi_state * cfi;			/* 176   8 */
  +	struct stack_op * stack_ops;		/* 160   8 */
  +	struct cfi_state * cfi;			/* 168   8 */
  -	/* size: 184, cachelines: 3, members: 29 */
  -	/* sum members: 178, holes: 1, sum holes: 4 */
  +	/* size: 176, cachelines: 3, members: 29 */
  +	/* sum members: 170, holes: 1, sum holes: 4 */
  	/* sum bitfield members: 9 bits, bit holes: 1, sum bit holes: 7 bits */
  -	/* last cacheline: 56 bytes */
  +	/* last cacheline: 48 bytes */
  };

  pre:  5:58.22 real, 226.69 user, 131.22 sys, 26221520 mem
  post: 5:58.50 real, 229.64 user, 128.65 sys, 26221520 mem

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org> # build only
Tested-by: Thomas Weißschuh <linux@weissschuh.net> # compile and run
Link: https://lore.kernel.org/r/20230208172245.362196959@infradead.org
-