- 27 Dec, 2019 40 commits
-
Submitted by Nathan Chancellor

mainline inclusion
from mainline-5.1-rc1
commit 6f4e626fb0cc93d50b49b79c2ee33bd769ee57f0
category: bugfix
bugzilla: 12189
CVE: NA

---------------------------

Clang warns several times in the scsi subsystem (trimmed for brevity):

  drivers/scsi/hpsa.c:6209:7: warning: overflow converting case value to switch condition type (2147762695 to 18446744071562347015) [-Wswitch]
          case CCISS_GETBUSTYPES:
               ^
  drivers/scsi/hpsa.c:6208:7: warning: overflow converting case value to switch condition type (2147762694 to 18446744071562347014) [-Wswitch]
          case CCISS_GETHEARTBEAT:
               ^

The root cause is that the _IOC macro can generate really large numbers, which don't fit into type 'int', which is used for the cmd parameter in the ioctls in scsi_host_template. My research into how GCC and Clang are handling this at a low level didn't prove fruitful.

However, looking at the rest of the kernel tree, all ioctls use an 'unsigned int' for the cmd parameter, which will fit all of the _IOC values in the scsi/ata subsystems. Make that change because none of the ioctls expect a negative value for any command, it brings the ioctls in line with the rest of the kernel, and it removes ambiguity, which is never good when dealing with compilers.

Link: https://github.com/ClangBuiltLinux/linux/issues/85
Link: https://github.com/ClangBuiltLinux/linux/issues/154
Link: https://github.com/ClangBuiltLinux/linux/issues/157
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Bradley Grove <bgrove@attotech.com>
Acked-by: Don Brace <don.brace@microsemi.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
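
As a hedged, userspace-only illustration of the overflow (DEMO_IOC below is a simplified stand-in for the kernel's _IOC(); it is not part of the patch): a command with the "read" direction bit set lands above INT_MAX, so it only fits in an unsigned int.

  #include <stdio.h>
  #include <limits.h>

  /* simplified _IOC()-style layout: dir | size | type | nr */
  #define DEMO_IOC(dir, type, nr, size) \
          (((unsigned int)(dir) << 30) | ((unsigned int)(size) << 16) | \
           ((unsigned int)(type) << 8) | (unsigned int)(nr))

  int main(void)
  {
          /* same value as the CCISS_GETBUSTYPES warning above: 2147762695 */
          unsigned int cmd = DEMO_IOC(2u, 'B', 7, 4);

          printf("cmd = %u, INT_MAX = %d\n", cmd, INT_MAX);
          printf("as int: %d\n", (int)cmd); /* negative: the overflow Clang flags */
          return 0;
  }

-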
Submitted by Douglas Anderson

mainline inclusion
from mainline-5.1-rc1
commit 31b265b3baaf
category: bugfix
bugzilla: 12725
CVE: NA

-------------------------------------------------

As reported back in 2016-11 [1], the "ftdump" kdb command triggers a BUG for "sleeping function called from invalid context".

kdb's "ftdump" command wants to call ring_buffer_read_prepare() in atomic context. A very simple solution for this is to add allocation flags to ring_buffer_read_prepare() so kdb can call it without triggering the allocation error. This patch does that.

Note that in the original email thread about this, it was suggested that perhaps the solution for kdb was to either preallocate the buffer ahead of time or create our own iterator. I'm hoping that this alternative of adding allocation flags to ring_buffer_read_prepare() can be considered since it means I don't need to duplicate more of the core trace code into "trace_kdb.c" (for either creating my own iterator or re-preparing a ring allocator whose memory was already allocated).

NOTE: another option for kdb is to actually figure out how to make it reuse the existing ftrace_dump() function and totally eliminate the duplication. This sounds very appealing and actually works (the "sr z" command can be seen to properly dump the ftrace buffer). The downside here is that ftrace_dump() fully consumes the trace buffer. Unless that is changed I'd rather not use it because it means "ftdump | grep xyz" won't be very useful to search the ftrace buffer since it will throw away the whole trace on the first grep. A future patch to dump only the last few lines of the buffer will also be hard to implement.

[1] https://lkml.kernel.org/r/20161117191605.GA21459@google.com

Link: http://lkml.kernel.org/r/20190308193205.213659-1-dianders@chromium.org
Reported-by: Brian Norris <briannorris@chromium.org>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
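
A minimal sketch of the call shape after this change (ring_buffer_read_prepare() and the GFP constants are taken from the commit text; the surrounding variables are illustrative only):

  /* kdb's "ftdump" runs in atomic context, so it can now request a
   * non-sleeping allocation for the read iterator: */
  iter = ring_buffer_read_prepare(buffer, cpu, GFP_ATOMIC);

  /* ordinary trace readers keep the sleeping allocation: */
  iter = ring_buffer_read_prepare(buffer, cpu, GFP_KERNEL);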
-
Submitted by Xiongfeng Wang

euler inclusion
category: feature
Bugzilla: 12567
CVE: N/A

----------------------------------------

When we panic in a hardlockup, the secure timer interrupt remains active, because the firmware clears the EOI only after dispatch is completed. This causes the arm_arch_timer interrupt to fail to trigger in the second kernel.

This patch adds a new SMC helper to clear the EOI of a given interrupt, and clears the EOI of the secure timer before booting the second kernel.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Ilya Dryomov

commit bb229bbb3bf63d23128e851a1f3b85c083178fa1 upstream.

Because map updates are distributed lazily, an OSD may not know about the new blacklist for quite some time after "osd blacklist add" command is completed. This makes it possible for a blacklisted but still alive client to overwrite a post-blacklist update, resulting in data corruption.

Waiting for latest osdmap in ceph_monc_blacklist_add() and thus using the post-blacklist epoch for all post-blacklist requests ensures that all such requests "wait" for the blacklist to come into force on their respective OSDs.

Cc: stable@vger.kernel.org
Fixes: 6305a3b4 ("libceph: support for blacklisting clients")
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Jason Dillaman <dillaman@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by zhong jiang

euler inclusion
category: bugfix
CVE: NA
Bugzilla: 13084

---------------------------

When a large value is written to cache_reclaim_s, the background thread that reclaims free pagecache sleeps for that long interval. If a much smaller value is then written, the result is unexpected: the previously armed, longer time slice of cache_reclaim_s must drain first, so we wait far too long for the new interval to take effect.

Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Sean Christopherson

commit 152482580a1b0accb60676063a1ac57b2d12daf6 upstream.

kvm_arch_memslots_updated() is at this point in time an x86-specific hook for handling MMIO generation wraparound. x86 stashes 19 bits of the memslots generation number in its MMIO sptes in order to avoid full page fault walks for repeat faults on emulated MMIO addresses. Because only 19 bits are used, wrapping the MMIO generation number is possible, if unlikely. kvm_arch_memslots_updated() alerts x86 that the generation has changed so that it can invalidate all MMIO sptes in case the effective MMIO generation has wrapped so as to avoid using a stale spte, e.g. a (very) old spte that was created with generation==0.

Given that the purpose of kvm_arch_memslots_updated() is to prevent consuming stale entries, it needs to be called before the new generation is propagated to memslots. Invalidating the MMIO sptes after updating memslots means that there is a window where a vCPU could dereference the new memslots generation, e.g. 0, and incorrectly reuse an old MMIO spte that was created with (pre-wrap) generation==0.

Fixes: e59dbe09 ("KVM: Introduce kvm_arch_memslots_updated()")
Cc: <stable@vger.kernel.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Willem de Bruijn

[ Upstream commit 4c3024debf62de4c6ac6d3cb4c0063be21d4f652 ]

BPF can adjust gso only for tcp bytestreams. Fail on other gso types. But only on gso packets. It does not touch this field if !gso_size.

Fixes: b90efd225874 ("bpf: only adjust gso_size on bytestream protocols")
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

-
Submitted by Jann Horn

commit a0ce2f0aa6ad97c3d4927bf2ca54bcebdf062d55 upstream.

Before this patch, it was possible for two pipes to affect each other after data had been transferred between them with tee():

============
$ cat tee_test.c

int main(void) {
	int pipe_a[2];
	if (pipe(pipe_a)) err(1, "pipe");
	int pipe_b[2];
	if (pipe(pipe_b)) err(1, "pipe");
	if (write(pipe_a[1], "abcd", 4) != 4) err(1, "write");
	if (tee(pipe_a[0], pipe_b[1], 2, 0) != 2) err(1, "tee");
	if (write(pipe_b[1], "xx", 2) != 2) err(1, "write");

	char buf[5];
	if (read(pipe_a[0], buf, 4) != 4) err(1, "read");
	buf[4] = 0;
	printf("got back: '%s'\n", buf);
}
$ gcc -o tee_test tee_test.c
$ ./tee_test
got back: 'abxx'
$
============

As suggested by Al Viro, fix it by creating a separate type for non-mergeable pipe buffers, then changing the types of buffers in splice_pipe_to_pipe() and link_pipe().

Cc: <stable@vger.kernel.org>
Fixes: 7c77f0b3 ("splice: implement pipe to pipe splicing")
Fixes: 70524490 ("[PATCH] splice: add support for sys_tee()")
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

-
Submitted by Yang Yingliang

hulk inclusion
category: feature
bugzilla: 12805
CVE: NA

-------------------------------------------------

The patchset "arm64: perf: add nmi support for pmu" changes the perf code's old logic, which may carry some risk. Add a 'pmu_nmi_enable' switch to control which code path is used: when pmu_nmi_enable is disabled, perf uses the old code. The default value is false.

To enable NMI support, add 'pmu_nmi_enable' to the kernel command line.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Julien Thierry

hulk inclusion
category: feature
bugzilla: 12804
CVE: NA

-------------------------------------------------

Perf event backend for ARMv8 and ARMv7 no longer uses the pmu_lock. The only remaining user is the ARMv6 event backend.

Move the pmu_lock out of the generic arm_pmu driver into the ARMv6 code.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Binder Makin

euler inclusion
category: feature/cgroups
bugzilla: 12821
CVE: NA

-------------------------------------------------

Add a lockless resource controller for limiting the number of open file handles. This allows us to catch misbehaving processes and return EMFILE instead of ENOMEM for kernel memory limits.

Original link: https://lwn.net/Articles/604129/. Since https://gitlab.indel.ch/thirdparty/linux-indel/commit/5b1efc02, all memory accounting and limiting has been switched over to the lockless page counters, so we convert the original resource counters to lockless page counters.

Signed-off-by: Binder Makin <merimus@google.com>
Reviewed-by: Qiang Huang <h.huangqiang@huawei.com>
[cm: convert to lockless page counters]
Signed-off-by: luojiajun <luojiajun3@huawei.com>

v1->v2: fix some code

Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Alexander Shishkin

mainline inclusion
from mainline-5.1-rc1
commit c60f83b813e5b25ccd5de7e8c8925c31b3aebcc1
category: bugfix
bugzilla: 11596
CVE: NA

-------------------------------------------------

Currently, the address range calculation for file-based filters works as long as the vma that maps the matching part of the object file starts from offset zero into the file (vm_pgoff==0). Otherwise, the resulting filter range would be off by vm_pgoff pages. Another related problem is that in case of a partially matching vma, that is, a vma that matches part of a filter region, the filter range size wouldn't be adjusted.

Fix the arithmetics around address filter range calculations, taking into account vma offset, so that the entire calculation is done before the filter configuration is passed to the PMU drivers instead of having those drivers do the final bit of arithmetics.

Based on the patch by Adrian Hunter <adrian.hunter.intel.com>.

Reported-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Tested-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Fixes: 375637bc ("perf/core: Introduce address range filtering")
Link: http://lkml.kernel.org/r/20190215115655.63469-3-alexander.shishkin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
(cherry picked from commit c60f83b813e5b25ccd5de7e8c8925c31b3aebcc1)
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
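
A hedged, userspace-only illustration of the off-by-vm_pgoff arithmetic (toy values, not the perf code itself):

  #include <stdio.h>

  int main(void)
  {
          unsigned long page = 4096;
          unsigned long vm_start = 0x400000;   /* where the vma begins */
          unsigned long vm_pgoff = 2;          /* vma maps the file starting at page 2 */
          unsigned long filter_off = 3 * page; /* filter targets file offset 3 pages */

          unsigned long wrong = vm_start + filter_off;                   /* off by vm_pgoff pages */
          unsigned long right = vm_start + filter_off - vm_pgoff * page; /* accounts for the vma's file offset */

          printf("wrong = 0x%lx, right = 0x%lx\n", wrong, right);
          return 0;
  }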
-
Submitted by Oleg Nesterov

mainline inclusion
from mainline-5.1-rc1
commit 51bee5abeab2058ea5813c5615d6197a23dbf041
category: bugfix
bugzilla: 11797
CVE: NA

-------------------------------------------------

The only user of cgroup_subsys->free() callback is pids_cgrp_subsys which needs pids_free() to uncharge the pid. However, ->free() is called from __put_task_struct()->cgroup_free() and this is too late. Even the trivial program which does

	for (;;) {
		int pid = fork();
		assert(pid >= 0);
		if (pid)
			wait(NULL);
		else
			exit(0);
	}

can run out of limits because release_task()->call_rcu(delayed_put_task_struct) implies an RCU gp after the task/pid goes away and before the final put().

Test-case:

	mkdir -p /tmp/CG
	mount -t cgroup2 none /tmp/CG
	echo '+pids' > /tmp/CG/cgroup.subtree_control
	mkdir /tmp/CG/PID
	echo 2 > /tmp/CG/PID/pids.max
	perl -e 'while ($p = fork) { wait; } $p // die "fork failed: $!\n"' &
	echo $! > /tmp/CG/PID/cgroup.procs

Without this patch the forking process fails soon after migration.

Rename cgroup_subsys->free() to cgroup_subsys->release() and move the callsite into the new helper, cgroup_release(), called by release_task() which actually frees the pid(s).

Reported-by: Herton R. Krzesinski <hkrzesin@redhat.com>
Reported-by: Jan Stancek <jstancek@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
(cherry picked from commit 51bee5abeab2058ea5813c5615d6197a23dbf041)
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

-
Submitted by Heikki Krogerus

mainline inclusion
from mainline-5.1-rc1
commit 2b6e492467c78183bb629bb0a100ea3509b615a5
category: bugfix
bugzilla: 11517
CVE: NA

-------------------------------------------------

With string type property entries we need to use sizeof(const char *) instead of the number of characters as the length of the entry. If the string was shorter than sizeof(const char *), attempts to read it would have failed with -EOVERFLOW. The problem has been hidden because all built-in string properties have had a string longer than 8 characters until now.

Fixes: a85f4204 ("device property: helper macros for property entry creation")
Cc: 4.5+ <stable@vger.kernel.org> # 4.5+
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 2b6e492467c78183bb629bb0a100ea3509b615a5)
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
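
A hedged userspace illustration of why the two lengths diverge for short strings (the property-entry macros themselves are not reproduced here):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          const char *value = "gpio"; /* shorter than a pointer on 64-bit */

          /* What a string property actually stores and hands out is the
           * pointer value, so the entry length must be sizeof(const char *),
           * not the character count. */
          printf("strlen = %zu, sizeof(const char *) = %zu\n",
                 strlen(value), sizeof(const char *));
          return 0;
  }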
-
Submitted by Luc Van Oostenryck

mainline inclusion
from mainline-5.1-rc1
commit 62461ac2e5b6520b6d65fc6d7d7b4b8df4b848d8
category: bugfix
bugzilla: 11899
CVE: NA

-------------------------------------------------

The percpu member of this structure is declared as:

	struct ... ** __percpu member;

So its type is:

	__percpu pointer to pointer to struct ...

But looking at how it's used, its type should be:

	pointer to __percpu pointer to struct ...

and it should thus be declared as:

	struct ... * __percpu *member;

So fix the placement of '__percpu' in the definition of these structures. This silences a few of Sparse's warnings like:

	warning: incorrect type in initializer (different address spaces)
	  expected void const [noderef] <asn:3> *__vpp_verify
	  got struct sched_domain **

Link: http://lkml.kernel.org/r/20190118144902.79065-1-luc.vanoostenryck@gmail.com
Fixes: 017c59c0 ("relay: Use per CPU constructs for the relay channel buffer pointers")
Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
Cc: Jens Axboe <axboe@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 62461ac2e5b6520b6d65fc6d7d7b4b8df4b848d8)
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by NeilBrown

mainline inclusion
from mainline-5.1-rc1
commit 0bdb50c531f7377a9da80d3ce2d61f389c84cb30
category: bugfix
bugzilla: 12155
CVE: NA

-------------------------------------------------

A dm-raid array with devices larger than 4GB won't assemble on a 32 bit host since _check_data_dev_sectors() was added in 4.16. This is because to_sector() treats its argument as an "unsigned long" which is 32bits (4GB) on a 32bit host. Using "unsigned long long" is more correct.

Kernels as early as 4.2 can have other problems due to to_sector() being used on the size of a device.

Fixes: 0cf45031 ("dm raid: add support for the MD RAID0 personality")
cc: stable@vger.kernel.org (v4.2+)
Reported-and-tested-by: Guillaume Perréal <gperreal@free.fr>
Signed-off-by: NeilBrown <neil@brown.name>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
(cherry picked from commit 0bdb50c531f7377a9da80d3ce2d61f389c84cb30)
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
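
A hedged demo of the truncation (userspace model: uint32_t stands in for a 32-bit host's unsigned long; the shift by 9 mirrors the byte-to-sector conversion described above):

  #include <stdio.h>
  #include <stdint.h>

  static uint64_t to_sector_wide(unsigned long long n) { return n >> 9; }
  static uint64_t to_sector_narrow(uint32_t n)         { return n >> 9; }

  int main(void)
  {
          unsigned long long bytes = 5ULL << 30; /* a 5 GiB device */

          printf("wide: %llu sectors, narrow: %llu sectors\n",
                 (unsigned long long)to_sector_wide(bytes),
                 (unsigned long long)to_sector_narrow((uint32_t)bytes));
          return 0;
  }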
-
Submitted by Ard Biesheuvel

mainline inclusion
from mainline-5.1-rc1
commit 494c704f9af0a0cddf593b381ea44320888733e6
category: bugfix
bugzilla: NA
CVE: NA

-------------------------------------------------

The UEFI spec and EDK2 reference implementation both define EFI_GUID as

	struct {
		u32 a;
		u16 b;
		u16 c;
		u8 d[8];
	};

and so the implied alignment is 32 bits, not 8 bits like our guid_t. In some cases (i.e., on 32-bit ARM), this means that firmware services invoked by the kernel may assume that efi_guid_t* arguments are 32-bit aligned, and use memory accessors that do not tolerate misalignment. So let's set the minimum alignment to 32 bits.

Note that the UEFI spec as well as some comments in the EDK2 code base suggest that EFI_GUID should be 64-bit aligned, but this appears to be a mistake, given that no code seems to exist that actually enforces that or relies on it.

Reported-by: Heinrich Schuchardt <xypron.glpk@gmx.de>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Alexander Graf <agraf@suse.de>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jeffrey Hugo <jhugo@codeaurora.org>
Cc: Lee Jones <lee.jones@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Jones <pjones@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/20190202094119.13230-5-ard.biesheuvel@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 494c704f9af0a0cddf593b381ea44320888733e6)
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

-
Submitted by Andy Shevchenko

mainline inclusion
from mainline-5.0
commit f4d7b3e23d259c44f1f1c39645450680fcd935d6
category: bugfix
bugzilla: 11090
CVE: NA

-------------------------------------------------

1 << 31 is Undefined Behaviour according to the C standard. Use U type modifier to avoid theoretical overflow.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit f4d7b3e23d259c44f1f1c39645450680fcd935d6)
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
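
A hedged illustration of the undefined behaviour being avoided (userspace, not the driver code):

  #include <stdio.h>

  int main(void)
  {
          unsigned int ok = 1U << 31; /* well-defined: 0x80000000 */
          /* int bad = 1 << 31;          UB: shifts into the sign bit of int,
           *                             flagged by -fsanitize=undefined */
          printf("0x%x\n", ok);
          return 0;
  }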
-
Submitted by Jose Abreu

[ Upstream commit 4ec5302fa906ec9d86597b236f62315bacdb9622 ]

If we don't have DT then stmmac_clk will not be available. Let's add a new Platform Data field so that we can specify the refclk by this means. This way we can still use the coalesce command in PCI based setups.

Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Cc: Joao Pinto <jpinto@synopsys.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Douglas Anderson

mainline inclusion
from mainline-5.0
commit eeb35df05244
category: bugfix
bugzilla: 11678
CVE: NA

-------------------------------------------------

As of the patch ("PM / Domains: Mark "name" const in genpd_dev_pm_attach_by_name()") it's clear that the name in dev_pm_domain_attach_by_name() can be const. Mark it as so. This allows drivers to pass in a name that was declared "const" in a driver.

Fixes: 27dceb81 ("PM / Domains: Introduce dev_pm_domain_attach_by_name()")
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

-
Submitted by Douglas Anderson

mainline inclusion
from mainline-5.0
commit 7416f1f20687
category: bugfix
bugzilla: 11684
CVE: NA

-------------------------------------------------

The genpd_dev_pm_attach_by_name() simply takes the name and passes it to of_property_match_string() where the argument is "const char *". Adding a const here allows a later patch to add a const to dev_pm_domain_attach_by_name() which allows drivers to pass in a name that was declared "const" in a driver.

Fixes: 5d6be70a ("PM / Domains: Introduce option to attach a device by name to genpd")
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Viresh Kumar

commit 625c85a62cb7d3c79f6e16de3cfa972033658250 upstream.

The cpufreq_global_kobject is created using kobject_create_and_add() helper, which assigns the kobj_type as dynamic_kobj_ktype and show/store routines are set to kobj_attr_show() and kobj_attr_store(). These routines pass struct kobj_attribute as an argument to the show/store callbacks. But all the cpufreq files created using the cpufreq_global_kobject expect the argument to be of type struct attribute.

Things work fine currently as no one accesses the "attr" argument. We may not see issues even if the argument is used, as struct kobj_attribute has struct attribute as its first element and so they will both get same address. But this is logically incorrect and we should rather use struct kobj_attribute instead of struct global_attr in the cpufreq core and drivers and the show/store callbacks should take struct kobj_attribute as argument instead.

This bug is caught using CFI CLANG builds in android kernel which catches mismatch in function prototypes for such callbacks.

Reported-by: Donghee Han <dh.han@samsung.com>
Reported-by: Sangkyu Kim <skwith.kim@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
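
A hedged sketch of why the mismatch went unnoticed (simplified stand-ins for the kernel structs; not the cpufreq code):

  #include <stdio.h>

  struct attribute { const char *name; };
  struct kobj_attribute { struct attribute attr; int stub; };

  int main(void)
  {
          struct kobj_attribute ka = { { "boost" }, 0 };

          /* struct attribute is the first member, so both pointers share an
           * address and callbacks taking the "wrong" type happen to work;
           * the prototypes still differ, which is exactly what CFI flags. */
          printf("same address: %d\n", (void *)&ka == (void *)&ka.attr);
          return 0;
  }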
-
Submitted by David Howells

mainline inclusion
from mainline-5.0-rc8
commit 822ad64d7e46a8e2c8b8a796738d7b657cbb146d
category: bugfix
bugzilla: 10783
CVE: NA

---------------------------

In the request_key() upcall mechanism there's a dependency loop by which if a key type driver overrides the ->request_key hook and the userspace side manages to lose the authorisation key, the auth key and the internal construction record (struct key_construction) can keep each other pinned.

Fix this by the following changes:

 (1) Killing off the construction record and using the auth key instead.

 (2) Including the operation name in the auth key payload and making the payload available outside of security/keys/.

 (3) The ->request_key hook is given the authkey instead of the cons record and operation name.

Changes (2) and (3) allow the auth key to naturally be cleaned up if the keyring it is in is destroyed or cleared or the auth key is unlinked.

Fixes: 7ee02a316600 ("keys: Fix dependency loop between construction record and auth key")
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <james.morris@microsoft.com>
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

-
Submitted by Yufen Yu

euler inclusion
category: bugfix
bugzilla: 10984
CVE: NA

---------------------------

When .mknod creates a block device file in hugetlbfs, it allocates an inode and kmallocs a 'struct resv_map' in resv_map_alloc(). For now, inode->i_mapping->private_data is used to point to the resv_map. However, when the device is opened, bd_acquire() will set i_mapping to bd_inode->i_mapping, resulting in a resv_map memory leak.

We fix the leak by adding a new entry, resv_map, in hugetlbfs_inode_info, which can store the resv_map pointer.

Program to reproduce:

	mount -t hugetlbfs nodev hugetlbfs
	mknod hugetlbfs/dev b 0 0
	exec 30<> hugetlbfs/dev
	umount hugetlbfs/

Fixes: 9119a41e ("mm, hugetlb: unify region structure handling")
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Miao Xie <miaoxie@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Maciej Żenczykowski

[ Upstream commit 3b707c3008cad04604c1f50e39f456621821c414 ]

__bpf_redirect() and act_mirred check this boolean to determine whether to prefix an ethernet header.

Signed-off-by: Maciej Żenczykowski <maze@google.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Tejun Heo

[ Upstream commit 7fc5854f8c6efae9e7624970ab49a1eac2faefb1 ]

sync_inodes_sb() can race against cgwb (cgroup writeback) membership switches and fail to writeback some inodes. For example, if an inode switches to another wb while sync_inodes_sb() is in progress, the new wb might not be visible to bdi_split_work_to_wbs() at all or the inode might jump from a wb which hasn't issued writebacks yet to one which already has.

This patch adds backing_dev_info->wb_switch_rwsem to synchronize cgwb switch path against sync_inodes_sb() so that sync_inodes_sb() is guaranteed to see all the target wbs and inodes can't jump wbs to escape syncing.

v2: Fixed misplaced rwsem init. Spotted by Jiufei.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Jiufei Xue <xuejiufei@gmail.com>
Link: http://lkml.kernel.org/r/dc694ae2-f07f-61e1-7097-7c8411cee12d@gmail.com
Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Dou Liyang

[ Upstream commit 76f99ae5b54d48430d1f0c5512a84da0ff9761e0 ]

Linux spreads out the non managed interrupt across the possible target CPUs to avoid vector space exhaustion.

Managed interrupts are treated differently, as for them the vectors are reserved (with guarantee) when the interrupt descriptors are initialized. When the interrupt is requested a real vector is assigned. The assignment logic uses the first CPU in the affinity mask for assignment. If the interrupt has more than one CPU in the affinity mask, which happens when a multi queue device has less queues than CPUs, then doing the same search as for non managed interrupts makes sense as it puts the interrupt on the least interrupt plagued CPU. For single CPU affine vectors that's obviously a NOOP.

Restructure the matrix allocation code so it does the 'best CPU' search, add the sanity check for an empty affinity mask and adapt the call site in the x86 vector management code.

[ tglx: Added the empty mask check to the core and improved change log ]

Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: hpa@zytor.com
Link: https://lkml.kernel.org/r/20180908175838.14450-2-dou_liyang@163.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Willem de Bruijn

mainline inclusion
from mainline-v5.0-rc7
commit b90efd225874
category: bugfix
bugzilla: 10773
CVE: NA

-------------------------------------------------

bpf_skb_change_proto and bpf_skb_adjust_room change skb header length. For GSO packets they adjust gso_size to maintain the same MTU. The gso size can only be safely adjusted on bytestream protocols. Commit d02f51cb ("bpf: fix bpf_skb_adjust_net/bpf_skb_proto_xlat to deal with gso sctp skbs") excluded SKB_GSO_SCTP. Since then type SKB_GSO_UDP_L4 has been added, whose contents are one gso_size unit per datagram. Also exclude these.

Move from a blacklist to a whitelist check to future proof against additional such new GSO types, e.g., for fraglist based GRO.

Fixes: bec1f6f6 ("udp: generate gso with UDP_SEGMENT")
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Lin Miaohe <linmiaohe@huawei.com>
Reviewed-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
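
A hedged sketch of the whitelist idea (bit values and names are illustrative, not the kernel's definitions):

  #include <stdio.h>

  #define GSO_TCPV4  (1u << 0)
  #define GSO_TCPV6  (1u << 4)
  #define GSO_UDP_L4 (1u << 16)

  /* whitelist: only TCP bytestream GSO may have gso_size rewritten,
   * so future non-bytestream types are rejected by default */
  static int gso_size_adjustable(unsigned int gso_type)
  {
          return gso_type != 0 && (gso_type & ~(GSO_TCPV4 | GSO_TCPV6)) == 0;
  }

  int main(void)
  {
          printf("tcp4: %d, udp_l4: %d\n",
                 gso_size_adjustable(GSO_TCPV4),
                 gso_size_adjustable(GSO_UDP_L4));
          return 0;
  }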
-
Submitted by Willem de Bruijn

commit 9e8db591 upstream.

GSO packets with vnet_hdr must conform to a small set of gso_types. The below commit uses flow dissection to drop packets that do not. But it has false positives when the skb is not fully initialized. Dissection needs skb->protocol and skb->network_header.

Infer skb->protocol from gso_type as the two must agree. SKB_GSO_UDP can use both ipv4 and ipv6, so try both. Exclude callers for which network header offset is not known.

Fixes: d5be7f63 ("net: validate untrusted gso packets without csum offload")
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Willem de Bruijn

commit d5be7f63 upstream.

Syzkaller again found a path to a kernel crash through bad gso input: by building an excessively large packet to cause an skb field to wrap.

If VIRTIO_NET_HDR_F_NEEDS_CSUM was set this would have been dropped in skb_partial_csum_set.

GSO packets that do not set checksum offload are suspicious and rare. Most callers of virtio_net_hdr_to_skb already pass them to skb_probe_transport_header.

Move that test forward, change it to detect parse failure and drop packets on failure as those clearly are not one of the legitimate VIRTIO_NET_HDR_GSO types.

Fixes: bfd5f4a3 ("packet: Add GSO/csum offload support.")
Fixes: f43798c2 ("tun: Allow GSO using virtio_net_hdr")
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Denis Bolotin

[ Upstream commit 2d533a9287f2011632977e87ce2783f4c689c984 ]

In PBL chains with non power of 2 page count, the producer is not at the beginning of the chain when the index is 0 after a wrap. Therefore, after the producer index wraps around, the page index should be calculated more carefully.

Signed-off-by: Denis Bolotin <dbolotin@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
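
A hedged numeric illustration (toy values, not the qed driver code): with a non-power-of-2 page count, the page index after a wrap must come from a modulo, not a power-of-2 mask.

  #include <stdio.h>

  int main(void)
  {
          unsigned int elems_per_page = 8, page_cnt = 3; /* 24 elements per chain */
          unsigned int total = elems_per_page * page_cnt;
          unsigned int prod = 25; /* one element past a wrap */

          unsigned int by_modulo = (prod % total) / elems_per_page;        /* 0: correct */
          unsigned int by_mask = (prod / elems_per_page) & (page_cnt - 1); /* 2: wrong for page_cnt=3 */

          printf("modulo page = %u, masked page = %u\n", by_modulo, by_mask);
          return 0;
  }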
-
Submitted by Hongbo Yao

euler inclusion
category: bugfix
bugzilla: 10690
CVE: NA

-------------------------------------------------

I ran into this:

=========================================================================
UBSAN: Undefined behaviour in ./include/linux/time64.h:70:2
signed integer overflow:
1551059291 + 9223372036854775807 cannot be represented in type 'long long int'
CPU: 5 PID: 20064 Comm: syz-executor.2 Not tainted 4.19.24 #4
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0xca/0x13e lib/dump_stack.c:113
 ubsan_epilogue+0xe/0x81 lib/ubsan.c:159
 handle_overflow+0x193/0x1e2 lib/ubsan.c:190
 timespec64_add include/linux/time64.h:70 [inline]
 timekeeping_inject_offset+0x3ed/0x4e0 kernel/time/timekeeping.c:1301
 do_adjtimex+0x1e5/0x6c0 kernel/time/timekeeping.c:2360
 __do_sys_clock_adjtime+0x122/0x200 kernel/time/posix-timers.c:1086
 do_syscall_64+0xc8/0x580 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x462eb9
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f888aa2dc58 EFLAGS: 00000246 ORIG_RAX: 0000000000000131
RAX: ffffffffffffffda RBX: 000000000073bf00 RCX: 0000000000462eb9
RDX: 0000000000000000 RSI: 00000000200003c0 RDI: 0000000000000000
RBP: 0000000000000002 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f888aa2e6bc
R13: 00000000004bcae8 R14: 00000000006f6868 R15: 00000000ffffffff
==========================================================================

Since lhs.tv_sec and rhs.tv_sec are both time64_t, this is a signed addition which will cause undefined behaviour on overflow. The easiest way to avoid the overflow is to cast one of the arguments to unsigned (so the addition will be done using unsigned arithmetic).

This patch doesn't change generated code.

Signed-off-by: Hongbo Yao <yaohongbo@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
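
A hedged demo of the fix pattern (userspace; the conversion back to a signed type after the unsigned addition is implementation-defined but, unlike signed overflow, not undefined behaviour):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          int64_t lhs = 1551059291; /* the tv_sec values from the report */
          int64_t rhs = INT64_MAX;

          /* lhs + rhs directly would be signed overflow (the UBSAN hit);
           * casting one operand makes the addition unsigned and wrapping. */
          int64_t sec = (int64_t)((uint64_t)lhs + (uint64_t)rhs);

          printf("sec = %lld\n", (long long)sec);
          return 0;
  }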
-
Submitted by David S. Miller

[ Upstream commit 8681ef1f3d295bd3600315325f3b3396d76d02f6 ]

Fixes: 3b89ea9c5902 ("net: Fix for_each_netdev_feature on Big endian")
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

-
Submitted by Hauke Mehrtens

[ Upstream commit 3b89ea9c5902acccdbbdec307c85edd1bf52515e ]

The features attribute is of type u64 and stored in the native endianness on the system. The for_each_set_bit() macro takes a pointer to a 32 bit array and goes over the bits in this area. On little endian systems this also works with a u64 as the most significant bit is on the highest address, but on big endian the words are swapped. When we expect bit 15 here we get bit 47 (15 + 32).

This patch converts it more or less to its own for_each_set_bit() implementation which works on 64 bit integers directly. This is then completely in host endianness and should work as expected.

Fixes: fd867d51 ("net/core: generic support for disabling netdev features down stack")
Signed-off-by: Hauke Mehrtens <hauke.mehrtens@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
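
A hedged demo of the endian hazard (userspace; memcpy is used instead of pointer casts to keep it well-defined):

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  int main(void)
  {
          uint64_t features = 1ULL << 15;
          uint32_t words[2];

          memcpy(words, &features, sizeof(words));
          /* Little endian: words[0] == 0x8000 (bit 15 in the first word).
           * Big endian: words[0] == 0 and the bit shows up in words[1],
           * i.e. at position 47 (15 + 32) when scanned word by word. */
          printf("words[0] = 0x%x, words[1] = 0x%x\n",
                 (unsigned int)words[0], (unsigned int)words[1]);
          return 0;
  }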
-
Submitted by shenjian

The default m88e1510 LED configuration does not fit the hns3 driver. This patch adds a new LED configuration for it.

Feature or Bugfix: Bugfix

Signed-off-by: shenjian (K) <shenjian15@huawei.com>
Reviewed-by: lipeng <lipeng321@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Jiri Olsa

commit 81ec3f3c4c4d78f2d3b6689c9816bfbdf7417dbb upstream.

Vince (and later on Ravi) reported crashes in the BTS code during fuzzing with the following backtrace:

  general protection fault: 0000 [#1] SMP PTI
  ...
  RIP: 0010:perf_prepare_sample+0x8f/0x510
  ...
  Call Trace:
   <IRQ>
   ? intel_pmu_drain_bts_buffer+0x194/0x230
   intel_pmu_drain_bts_buffer+0x160/0x230
   ? tick_nohz_irq_exit+0x31/0x40
   ? smp_call_function_single_interrupt+0x48/0xe0
   ? call_function_single_interrupt+0xf/0x20
   ? call_function_single_interrupt+0xa/0x20
   ? x86_schedule_events+0x1a0/0x2f0
   ? x86_pmu_commit_txn+0xb4/0x100
   ? find_busiest_group+0x47/0x5d0
   ? perf_event_set_state.part.42+0x12/0x50
   ? perf_mux_hrtimer_restart+0x40/0xb0
   intel_pmu_disable_event+0xae/0x100
   ? intel_pmu_disable_event+0xae/0x100
   x86_pmu_stop+0x7a/0xb0
   x86_pmu_del+0x57/0x120
   event_sched_out.isra.101+0x83/0x180
   group_sched_out.part.103+0x57/0xe0
   ctx_sched_out+0x188/0x240
   ctx_resched+0xa8/0xd0
   __perf_event_enable+0x193/0x1e0
   event_function+0x8e/0xc0
   remote_function+0x41/0x50
   flush_smp_call_function_queue+0x68/0x100
   generic_smp_call_function_single_interrupt+0x13/0x30
   smp_call_function_single_interrupt+0x3e/0xe0
   call_function_single_interrupt+0xf/0x20
   </IRQ>

The reason is that while event init code does several checks for BTS events and prevents several unwanted config bits for BTS event (like precise_ip), the PERF_EVENT_IOC_PERIOD allows to create BTS event without those checks being done.

Following sequence will cause the crash: if we create an 'almost' BTS event with precise_ip and callchains, and then turn it into a BTS event (by changing its period via PERF_EVENT_IOC_PERIOD), it will crash the perf_prepare_sample() function because precise_ip events are expected to come in with callchain data initialized, but that's not the case for the intel_pmu_drain_bts_buffer() caller.

Add a check_period callback to be called before the period is changed via PERF_EVENT_IOC_PERIOD. It will deny the change if the event would become BTS. Plus add the limit_period check as well.

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20190204123532.GA4794@krava
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xie XiuQi

hulk inclusion
category: feature
feature: performance/latency
upstream: never
bugzilla: 2680,10641
CVE: NA

The previous patch "kmod: run usermodehelpers only on cpus allowed for kthreadd V2" allows setting the usermodehelper threads' affinity. Add an interface to disable/enable usermodhelper_affinity. It is disabled by default.

[XQ: backport patch from euleros 2.x:
 - kmod.c => umh.c]

Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Christoph Lameter

hulk inclusion
category: feature
feature: performance/latency
upstream: never
bugzilla: 2680,10641
CVE: NA

Isolate the usermodehelper kernel threads to other CPUs to avoid latency issues. With this patch, usermodehelper threads inherit kthreadd's affinity. For example, to isolate usermodehelper threads to CPU 1:

 1) taskset -cp 1 2    # bind the kthreadd task (pid 2) to cpu 1
 2) trigger the usermodehelper threads

---------------------------------------------

usermodehelper() threads can currently run on all processors. This is an issue for low latency cores. Spawning a new thread causes cpu holdoffs in the range of hundreds of microseconds to a few milliseconds. Not good for cores on which processes run that need to react as fast as possible.

kthreadd threads can be restricted using taskset to a limited set of processors. Then the kernel thread pool will not fork processes on those anymore, thereby protecting those processors from additional latencies.

Make usermodehelper() threads obey the limitations that kthreadd is restricted to. Kthreadd is not the parent of usermodehelper threads so we need to explicitly get the allowed processors for kthreadd.

Before this patch there is no way to limit the cpus that usermodehelper can run on, since the affinity is set when the thread is spawned to all processors.

[akpm@linux-foundation.org: set_cpus_allowed() doesn't exist when CONFIG_CPUMASK_OFFSTACK=y]
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Christoph Lameter <cl@linux.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <bitbucket@online.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: https://patchwork.kernel.org/patch/3153671/
Reported-and-tested-by: Xiangyou Xie <xiexiangyou@huawei.com>
[ 1) kmod.c => umh.c
  2) ____call_usermodehelper => call_usermodehelper_exec_async ]
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
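
A hedged sketch of the mechanism, assuming the helper thread simply copies kthreadd's allowed mask when it is spawned (field names vary across kernel versions; this is illustrative, not the exact hunk):

  /* in call_usermodehelper_exec_async(), before exec'ing the helper:
   * inherit whatever CPUs the admin restricted kthreadd (pid 2) to */
  set_cpus_allowed_ptr(current, &kthreadd_task->cpus_allowed);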
-
Submitted by zhongjiang

euler inclusion
category: bugfix
CVE: NA
Bugzilla: 9580

---------------------------

This patch controls enabling and disabling of the feature via /proc/sys/vm/cache_reclaim_enable, so users can completely shut down all of the feature's background threads on demand.

Signed-off-by: zhongjiang <zhongjiang@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by zhongjiang

euler inclusion
category: bugfix
CVE: NA
Bugzilla: 9580

---------------------------

Add a Kconfig option for the feature.

Signed-off-by: zhongjiang <zhongjiang@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-