- 09 January 2016, 2 commits
-
-
Submitted by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Submitted by Danesh Petigara

The link resume logic uses a 200 msec delay while debouncing the SControl
register. The rationale behind that delay is to accommodate some PHYs that
behave badly if their SStatus/SControl registers are pounded immediately on
resume. The Broadcom STB SATA PHY does not seem to have this issue. This patch
introduces a new link flag that allows platforms to skip the debounce delay if
it isn't needed.

Signed-off-by: Danesh Petigara <dpetigara@broadcom.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 08 January 2016, 1 commit
-
-
Submitted by Steven Rostedt (Red Hat)

If the module init code fails after calling ftrace_module_init() and before
calling do_init_module(), we can suffer from a memory leak. This is because
ftrace_module_init() allocates pages to store the locations that ftrace hooks
are placed in the module text.

If do_init_module() fails, it still calls the MODULE_GOING notifiers, which
will tell ftrace to clean up the pages it allocated for the module. But if
load_module() fails before then, the pages allocated by ftrace_module_init()
will never be freed.

Call ftrace_release_mod() on the module if load_module() fails before getting
to do_init_module().

Link: http://lkml.kernel.org/r/567CEA31.1070507@intel.com
Reported-by: "Qiu, PeiyangX" <peiyangx.qiu@intel.com>
Fixes: a949ae56 "ftrace/module: Hardcode ftrace_module_init() call into load_module()"
Cc: stable@vger.kernel.org # v2.6.38+
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
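The shape of the fix is the usual error-path rule: whatever an early init step allocates must be released on every later failure path, not only from the final teardown notifier. Below is a minimal user-space sketch of that pattern; all names (ftrace_module_init_demo, load_module_demo, and so on) are hypothetical stand-ins, not the kernel functions.

```c
/* Sketch only: release early allocations on every later failure path. */
#include <stdio.h>
#include <stdlib.h>

struct mod { void *ftrace_pages; };

static int ftrace_module_init_demo(struct mod *m)
{
        m->ftrace_pages = malloc(4096);      /* stands in for the ftrace record pages */
        return m->ftrace_pages ? 0 : -1;
}

static void ftrace_release_mod_demo(struct mod *m)
{
        free(m->ftrace_pages);               /* undo ftrace_module_init_demo() */
        m->ftrace_pages = NULL;
}

static int load_module_demo(struct mod *m, int fail_early)
{
        if (ftrace_module_init_demo(m))
                return -1;
        if (fail_early) {                    /* failure before "do_init_module()" */
                ftrace_release_mod_demo(m);  /* without this call, the pages leak */
                return -1;
        }
        /* ... the rest of module init would run here ... */
        return 0;
}

int main(void)
{
        struct mod m = {0};
        printf("early failure handled, rc=%d, pages=%p\n",
               load_module_demo(&m, 1), m.ftrace_pages);
        return 0;
}
```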
-
- 06 January 2016, 6 commits
-
-
Submitted by Jiri Olsa

The sched_entity::avg field collides with read-mostly sched_entity data. The
perf c2c tool showed many read HITM accesses across many CPUs for
sched_entity's cfs_rq and my_q, while at the same time there were tons of
stores for avg.

After placing sched_entity::avg into a separate cache line, perf bench sched
pipe showed around a 20 second speedup.

NOTE: I cut out all perf events except for cycles and instructions from the
following output.

Before:

  $ perf stat -r 5 perf bench sched pipe -l 10000000
  # Running 'sched/pipe' benchmark:
  # Executed 10000000 pipe operations between two processes

     Total time: 270.348 [sec]
      27.034805 usecs/op
          36989 ops/sec
  ...
  245,537,074,035      cycles        # 1.433 GHz
  187,264,548,519      instructions  # 0.77  insns per cycle

  272.653840535 seconds time elapsed  ( +- 1.31% )

After:

  $ perf stat -r 5 perf bench sched pipe -l 10000000
  # Running 'sched/pipe' benchmark:
  # Executed 10000000 pipe operations between two processes

     Total time: 251.076 [sec]
      25.107678 usecs/op
          39828 ops/sec
  ...
  244,573,513,928      cycles        # 1.572 GHz
  187,409,641,157      instructions  # 0.76  insns per cycle

  251.679315188 seconds time elapsed  ( +- 0.31% )

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1449606239-28602-1-git-send-email-jolsa@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
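The underlying idea is to stop false sharing: stores to a write-hot member keep invalidating the cache line that also holds read-mostly members. A rough user-space illustration of the layout technique follows; the field names and the 64-byte line size are assumptions, and this is not the scheduler code itself.

```c
/* Illustration of separating a write-hot member onto its own cache line. */
#include <stdalign.h>
#include <stddef.h>
#include <stdio.h>

struct entity {
        /* read-mostly fields, shared across CPUs */
        void *cfs_rq;
        void *my_q;
        /* write-hot running average; force it onto its own (assumed 64-byte) line */
        alignas(64) struct {
                unsigned long long load_sum;
                unsigned long long util_sum;
        } avg;
};

int main(void)
{
        /* avg now starts at a 64-byte boundary, away from cfs_rq/my_q */
        printf("cfs_rq at %zu, my_q at %zu, avg at %zu, total %zu bytes\n",
               offsetof(struct entity, cfs_rq),
               offsetof(struct entity, my_q),
               offsetof(struct entity, avg),
               sizeof(struct entity));
        return 0;
}
```

In the kernel the same effect is typically achieved with its own alignment annotations rather than C11 alignas; the point here is only the placement.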
-
Submitted by Peter Zijlstra

Some of the sched bitfields (notably sched_reset_on_fork) can be set on a task
other than current; this can cause the read-modify-write to race with other
updates.

Since all the sched bits are serialized by scheduler locks, pull them into a
separate word.

Reported-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org
Cc: mhocko@kernel.org
Cc: vdavydov@parallels.com
Link: http://lkml.kernel.org/r/20151125150207.GM11639@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
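The race exists because adjacent bitfields share one storage word, so writing any of them compiles to a read-modify-write of the whole word; two writers protected by different locks can therefore clobber each other's bits. A small sketch of the layout fact, with made-up field names unrelated to task_struct:

```c
/* Sketch: bitfields declared together occupy one word of storage. */
#include <stdio.h>

struct flags {
        unsigned reset_on_fork:1;       /* imagine this set under lock A */
        unsigned contributes_to_load:1; /* and this set under lock B */
        unsigned migrated:1;
};

int main(void)
{
        struct flags f = {0};
        /* all three bits live in the same word */
        printf("sizeof(struct flags) = %zu\n", sizeof f);
        /* this single assignment is really: load word, OR in bit, store word */
        f.reset_on_fork = 1;
        printf("reset_on_fork=%u contributes_to_load=%u migrated=%u\n",
               f.reset_on_fork, f.contributes_to_load, f.migrated);
        return 0;
}
```

Moving the group into its own word, all serialized by one set of locks, removes the cross-lock read-modify-write.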
-
Submitted by Sergey Senozhatsky

Our global init task can have sub-threads, so a ->pid check is not reliable
enough for is_global_init(); we need to check the tgid instead. This was
spotted by Oleg and a fix was proposed by Richard a long time ago (see the
link below).

Oleg wrote:

: Because is_global_init() is only true for the main thread of /sbin/init.
:
: Just look at oom_unkillable_task(). It tries to not kill init. But, say,
: select_bad_process() can happily find a sub-thread of is_global_init()
: and still kill it.

I recently hit the problem in question; re-sending the patch (to the best of
my knowledge it has never been submitted) with an updated function comment.
Credit goes to Oleg and Richard.

Suggested-by: Richard Guy Briggs <rgb@redhat.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Serge E. Hallyn <serge.hallyn@ubuntu.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://www.redhat.com/archives/linux-audit/2013-December/msg00086.html
Signed-off-by: Ingo Molnar <mingo@kernel.org>
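The pid/tgid distinction can be seen from user space: every thread of a process has its own thread id, but they all share one thread group id, so an "is this init?" style test keyed on the per-thread id misses sub-threads. A small runnable illustration (build with -pthread); this is not the kernel helper, just the observable behaviour it relies on.

```c
/* Each thread has its own tid; all threads share the tgid (getpid()). */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *worker(void *arg)
{
        (void)arg;
        pid_t tid = (pid_t)syscall(SYS_gettid);
        /* for a sub-thread, tid != tgid, so a tid-based check would miss it */
        printf("thread: tid=%d tgid=%d tid==tgid? %s\n",
               tid, getpid(), tid == getpid() ? "yes" : "no");
        return NULL;
}

int main(void)
{
        pthread_t t;
        printf("main:   tid=%d tgid=%d\n",
               (pid_t)syscall(SYS_gettid), getpid());
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
}
```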
-
Submitted by Rabin Vincent

The SKF_AD_ALU_XOR_X ancillary is not like the other ancillary data
instructions, since it XORs A with X while all the others replace A with some
loaded value. All the BPF JITs fail to clear A if this is used as the first
instruction in a filter. This was found using american fuzzy lop.

Add a helper to determine if A needs to be cleared given the first instruction
in a filter, and use this in the JITs. Except for ARM, the rest have only been
compile-tested.

Fixes: 34805931 ("net: filter: get rid of BPF_S_* enum")
Signed-off-by: Rabin Vincent <rabin@rab.in>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Brian Norris

Spansion and Winbond have occasionally used the same manufacturer ID, and they
don't support the same features. In particular, writing SR=0 seems to break
read access for Spansion's s25fl064k. Unfortunately, we don't currently have a
way to differentiate these Spansion and Winbond parts, so rather than
regressing support for these Spansion flash, let's drop the new Winbond
lock/unlock support for now. We can try to address Winbond support during the
next release cycle.

Original discussion:
http://patchwork.ozlabs.org/patch/549173/
http://patchwork.ozlabs.org/patch/553683/

Fixes: 357ca38d ("mtd: spi-nor: support lock/unlock/is_locked for Winbond")
Fixes: c6fc2171 ("mtd: spi-nor: disable protection for Winbond flash at startup")
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Reported-by: Felix Fietkau <nbd@openwrt.org>
Cc: Felix Fietkau <nbd@openwrt.org>
-
Submitted by Jaehoon Chung

Remove the unused quirks; they are not used anywhere.

Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 31 December 2015, 1 commit
-
-
Submitted by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 30 December 2015, 2 commits
-
-
Submitted by Heiko Carstens

mod_zone_page_state() takes a "delta" integer argument. delta contains the
number of pages that should be added to or subtracted from a struct zone's
vm_stat field. If a zone is larger than 8TB this will cause overflows. E.g.,
for a zone with a size slightly larger than 8TB, the line

  mod_zone_page_state(zone, NR_ALLOC_BATCH, zone->managed_pages);

in mm/page_alloc.c:free_area_init_core() will result in a negative value for
the NR_ALLOC_BATCH entry within the zone's vm_stat, since 8TB contains
0x8xxxxxxx pages, which will be sign extended to a negative value.

Fix this by changing the delta argument to long type.

This could fix an early boot problem seen on s390, where we have a 9TB system
with only one node. ZONE_DMA contains 2GB and ZONE_NORMAL the rest. The system
is trying to allocate a GFP_DMA page but ZONE_DMA is completely empty, so it
tries to reclaim pages in an endless loop.

This was seen on a heavily patched 3.10 kernel. One possible explanation seems
to be the overflows caused by mod_zone_page_state(). Unfortunately I did not
have the chance to verify that this patch actually fixes the problem, since I
don't have access to the system right now. However, the overflow problem does
exist anyway. Given the description that a system with slightly less than 8TB
does work, this seems to be a candidate for the observed problem.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
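The arithmetic behind the 8TB threshold is easy to check: 8TB of 4KiB pages is exactly 2^31 pages, which no longer fits in a signed 32-bit delta. A small stand-alone demonstration (not the kernel function; the wrap-around shown for the int case is what happens on common two's-complement ABIs):

```c
/* Why an int delta breaks at 8TB: 2^43 bytes / 2^12 bytes per page = 2^31 pages. */
#include <stdio.h>

int main(void)
{
        unsigned long long zone_bytes = 8ULL << 40;     /* 8TB */
        unsigned long long pages = zone_bytes / 4096;   /* 0x80000000 pages */
        int delta32 = (int)pages;    /* old int argument: wraps negative on usual ABIs */
        long delta64 = (long)pages;  /* long argument holds the value on 64-bit */

        printf("pages   = %llu (0x%llx)\n", pages, pages);
        printf("as int  = %d\n", delta32);
        printf("as long = %ld\n", delta64);
        return 0;
}
```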
-
Submitted by Al Viro

All callers are better off with kfree_put_link().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 29 December 2015, 1 commit
-
-
Submitted by Jens Axboe

We currently only have an inline/sync helper to restart a stopped queue. If
drivers need an async version, they have to roll their own. Add a generic
helper instead.

Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 24 December 2015, 1 commit
-
-
Submitted by Bjørn Mork

NCM buffer sizes are negotiated with the device independently of the network
device MTU. The RX buffers are allocated by the usbnet framework based on the
rx_urb_size value set by cdc_ncm. A single RX buffer can hold a number of
MTU-sized packets.

The default usbnet change_mtu ndo only modifies rx_urb_size if it is equal to
hard_mtu. And the cdc_ncm driver will set rx_urb_size and hard_mtu
independently of each other, based on dwNtbInMaxSize and dwNtbOutMaxSize
respectively. It was therefore assumed that usbnet_change_mtu() would never
touch rx_urb_size. This failed to consider the case where dwNtbInMaxSize and
dwNtbOutMaxSize happen to be equal.

Fix by implementing an NCM-specific change_mtu ndo, modifying the netdev MTU
without touching the buffer size settings.

Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 December 2015, 5 commits
-
-
Submitted by Arnd Bergmann

The dw_mmc driver stores the physical address of the MMIO registers in a
pointer, which requires the use of type casts, and is actually broken if
anyone ever has this device on a 32-bit SoC with registers above 4GB. GCC
warns about this possibility when the driver is built with ARM LPAE enabled:

  mmc/host/dw_mmc.c: In function 'dw_mci_edmac_start_dma':
  mmc/host/dw_mmc.c:702:17: warning: cast from pointer to integer of different size
    cfg.dst_addr = (dma_addr_t)(host->phy_regs + fifo_offset);
                   ^
  mmc/host/dw_mmc-pltfm.c: In function 'dw_mci_pltfm_register':
  mmc/host/dw_mmc-pltfm.c:63:19: warning: cast to pointer from integer of different size
    host->phy_regs = (void *)(regs->start);

This changes the code to use resource_size_t, which gets rid of the warning,
the bug and the useless casts.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
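The bug, not just the warning, comes from the width mismatch: with LPAE a physical address is 64 bits wide while a 32-bit kernel pointer is not, so round-tripping the address through a pointer silently drops the high bits. The sketch below uses a uint32_t to stand in for a 32-bit pointer so the truncation is visible even when built on a 64-bit host; it is an illustration, not the driver code.

```c
/* Storing a >4GB physical address in a 32-bit pointer loses the high bits. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t phys = 0x100001000ULL;   /* register block above 4GB */
        uint32_t as_ptr = (uint32_t)phys; /* what a 32-bit void * would end up holding */
        uint64_t as_res = phys;           /* a resource_size_t-style 64-bit field keeps it */

        printf("phys address      = 0x%llx\n", (unsigned long long)phys);
        printf("stored in 32-bit  = 0x%x (high bits lost)\n", as_ptr);
        printf("stored in 64-bit  = 0x%llx\n", (unsigned long long)as_res);
        return 0;
}
```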
-
Submitted by Carlo Caione

This patch introduces a new MMC_CAP2_NO_SDIO cap, used to tell the mmc core
not to send SDIO-specific commands.

Signed-off-by: Carlo Caione <carlo@endlessm.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Submitted by Linus Walleij

This platform data struct is only used inside the MVSDIO driver, nowhere else
in the entire kernel. Move the struct into the driver and delete the external
header.

Cc: Nicolas Pitre <nico@fluxnic.net>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Submitted by Ulf Hansson

Instead of checking for "#ifdef" directly in the code, let's invent a pair of
mmc core functions to deal with registering/unregistering the MMC PM notifier
block. Implement stubs for these functions when CONFIG_PM_SLEEP is unset, as
in that case the PM notifiers aren't used.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Submitted by Ulf Hansson

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 21 December 2015, 3 commits
-
-
Submitted by Suravee Suthikulpanit

This patch introduces gicv2m_acpi_init(), which uses information in the MADT
GIC MSI frames structure to initialize the GICv2m driver. It also exposes the
gicv2m_init() function, which simplifies callers to a single GICv2m init
function.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Duc Dang <dhdang@apm.com>
Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Suravee Suthikulpanit

Since there will be several places checking whether fwnode.type is equal to
FWNODE_IRQCHIP, this patch adds a convenience function for this purpose.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Suravee Suthikulpanit

This patch introduces pci_msi_register_fwnode_provider() for an irqchip to
register a callback that provides a way to determine the appropriate MSI
domain for a PCI device.

It also introduces pci_host_bridge_acpi_msi_domain(), which returns the MSI
domain of the specified PCI host bridge with the DOMAIN_BUS_PCI_MSI bus token.
This is then assigned to the PCI device.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 19 December 2015, 4 commits
-
-
Submitted by Hidehiro Kawai

Currently, panic() and crash_kexec() can be called at the same time. For
example (x86 case):

  CPU 0:
    oops_end()
      crash_kexec()
        mutex_trylock() // acquired
          nmi_shootdown_cpus() // stop other CPUs

  CPU 1:
    panic()
      crash_kexec()
        mutex_trylock() // failed to acquire
      smp_send_stop() // stop other CPUs
      infinite loop

If CPU 1 calls smp_send_stop() before nmi_shootdown_cpus(), kdump fails.

In another case:

  CPU 0:
    oops_end()
      crash_kexec()
        mutex_trylock() // acquired
          <NMI>
          io_check_error()
            panic()
              crash_kexec()
                mutex_trylock() // failed to acquire
              infinite loop

Clearly, this is an undesirable result.

To fix this problem, this patch changes crash_kexec() to exclude others by
using the panic_cpu atomic.

Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: kexec@lists.infradead.org
Cc: linux-doc@vger.kernel.org
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Minfei Huang <mnfhuang@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Seth Jennings <sjenning@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: x86-ml <x86@kernel.org>
Link: http://lkml.kernel.org/r/20151210014630.25437.94161.stgit@softrs
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
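The exclusion scheme is essentially a single-winner compare-and-swap: whichever CPU first claims the "panic_cpu" slot proceeds with the crash path, everyone else backs off. The user-space sketch below models CPUs as threads with C11 atomics (build with -pthread); it illustrates the idea only and is not the kernel implementation.

```c
/* One-winner exclusion via compare-and-swap on a shared "panic_cpu" slot. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define PANIC_CPU_INVALID (-1)

static atomic_int panic_cpu = PANIC_CPU_INVALID;

static void *cpu_thread(void *arg)
{
        int cpu = *(int *)arg;
        int expected = PANIC_CPU_INVALID;

        /* only the first CPU to swap in its id runs the crash path */
        if (atomic_compare_exchange_strong(&panic_cpu, &expected, cpu))
                printf("cpu %d wins, would run crash_kexec()\n", cpu);
        else
                printf("cpu %d loses (cpu %d already panicking), stops itself\n",
                       cpu, expected);
        return NULL;
}

int main(void)
{
        pthread_t t[4];
        int ids[4] = {0, 1, 2, 3};

        for (int i = 0; i < 4; i++)
                pthread_create(&t[i], NULL, cpu_thread, &ids[i]);
        for (int i = 0; i < 4; i++)
                pthread_join(t[i], NULL);
        return 0;
}
```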
-
Submitted by Hidehiro Kawai

Currently, kdump_nmi_shootdown_cpus(), a subroutine of crash_kexec(), sends an
NMI IPI to CPUs which haven't called panic() to stop them, save their register
information and do some cleanups for crash dumping. However, if such a CPU is
infinitely looping in NMI context, we fail to save its register information
into the crash dump.

For example, this can happen when unknown NMIs are broadcast to all CPUs as
follows:

  CPU 0                              CPU 1
  ===========================        ==========================
  receive an unknown NMI
  unknown_nmi_error()
    panic()                          receive an unknown NMI
      spin_trylock(&panic_lock)      unknown_nmi_error()
      crash_kexec()                    panic()
                                         spin_trylock(&panic_lock)
                                         panic_smp_self_stop()
                                           infinite loop
        kdump_nmi_shootdown_cpus()
          issue NMI IPI -----------> blocked until IRET
                                           infinite loop...

Here, since CPU 1 is in NMI context, the second NMI from CPU 0 is blocked
until CPU 1 executes IRET. However, CPU 1 never executes IRET, so the NMI is
not handled and the callback function to save registers is never called. In
practice, this can happen on some servers which broadcast NMIs to all CPUs
when the NMI button is pushed.

To save registers in this case, we need to either:

  a) Return from the NMI handler instead of looping infinitely, or
  b) Call the callback function directly from the infinite loop

Inherently, a) is risky because NMI is also used to prevent corrupted data
from being propagated to devices. So, we chose b).

This patch does the following:

1. Move the infinite looping of CPUs which haven't called panic() in NMI
   context (actually done by panic_smp_self_stop()) outside of panic() to
   enable us to refer to pt_regs. Please note that panic_smp_self_stop() is
   still used for normal context.
2. Call a callback of kdump_nmi_shootdown_cpus() directly to save registers
   and do some cleanups after setting waiting_for_crash_ipi, which is used for
   counting down the number of CPUs which handled the callback.

Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Gobinda Charan Maji <gobinda.cemk07@gmail.com>
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Javi Merino <javi.merino@arm.com>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: kexec@lists.infradead.org
Cc: linux-doc@vger.kernel.org
Cc: lkml <linux-kernel@vger.kernel.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Seth Jennings <sjenning@redhat.com>
Cc: Stefan Lippers-Hollmann <s.l-h@gmx.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ulrich Obergfell <uobergfe@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Link: http://lkml.kernel.org/r/20151210014628.25437.75256.stgit@softrs
[ Cleanup comments, fixup formatting. ]
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Hidehiro Kawai

If panic on NMI happens just after panic() on the same CPU, panic() is
recursively called. The kernel stalls, as a result, after failing to acquire
panic_lock.

To avoid this problem, don't call panic() in NMI context if we've already
entered panic(). For that, introduce an nmi_panic() macro to reduce code
duplication. In the case of panic on NMI, don't return from NMI handlers if
another CPU already panicked.

Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Gobinda Charan Maji <gobinda.cemk07@gmail.com>
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Javi Merino <javi.merino@arm.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: kexec@lists.infradead.org
Cc: linux-doc@vger.kernel.org
Cc: lkml <linux-kernel@vger.kernel.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Seth Jennings <sjenning@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ulrich Obergfell <uobergfe@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/20151210014626.25437.13302.stgit@softrs
[ Cleanup comments, fixup formatting. ]
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by James Morse

mmdebug.h uses BUILD_BUG_ON_INVALID(), assuming someone else included
linux/bug.h. Include it ourselves. This avoids build failures such as:

  arch/arm64/include/asm/pgtable.h: In function 'set_pte_at':
  arch/arm64/include/asm/pgtable.h:281:3: error: implicit declaration of function 'BUILD_BUG_ON_INVALID' [-Werror=implicit-function-declaration]
     VM_WARN_ONCE(!pte_young(pte),

Fixes: 02602a18 ("bug: completely remove code generated by disabled VM_BUG_ON()")
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 16 December 2015, 5 commits
-
-
Submitted by Linus Walleij

The ARM RealView PB11MPCore reference design has some special bits in a system
controller register to set up the GIC in one of three modes: legacy, new with
DCC, new without DCC. The register is also used to enable FIQ.

Since the platform will not boot unless this register is set up to "new with
DCC" mode, we need a special quirk to be compiled in for the RealView
platforms. If we find the right compatible string on the GIC TestChip, we
enable this quirk by looking up the system controller and enabling the special
bits.

We depend on the CONFIG_REALVIEW_DT Kconfig symbol, as the old boardfile code
has the same fix hardcoded, and this is only needed for the attempts to
modernize the RealView code using device tree.

After fixing this, the PB11MPCore boots with device tree only.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier

We almost have all the bits required to create an irq domain on top of an MSI
domain. For this, we enable a few things:

- the virq is stored in the msi_desc
- device, msi_alloc_info and domain-specific data are stored in the
  platform_priv_data structure
- we introduce a new API for platform-msi:

  /* Create a MSI-based domain */
  struct irq_domain *
  platform_msi_create_device_domain(struct device *dev, unsigned int nvec,
                                    irq_write_msi_msg_t write_msi_msg,
                                    const struct irq_domain_ops *ops,
                                    void *host_data);

  /* Allocate MSIs in an MSI domain */
  int platform_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
                                unsigned int nr_irqs);

  /* Free MSIs from an MSI domain */
  void platform_msi_domain_free(struct irq_domain *domain, unsigned int virq,
                                unsigned int nvec);

  /* Obtain the host data passed to platform_msi_create_device_domain */
  void *platform_msi_get_host_data(struct irq_domain *domain);

platform_msi_create_device_domain() is a hybrid of irqdomain creation and
interrupt allocation, creating a domain backed by the MSIs associated with a
device. IRQs can then be allocated in that domain using
platform_msi_domain_alloc().

This now allows a wired-irq-to-MSI bridge to be created.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier

To be able to allocate interrupts from the MSI layer down, add a new
msi_domain_populate_irqs entry point.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier

The .prepare callbacks are so far only called from msi_domain_alloc_irqs. In
order to reuse that code, split it out into a msi_domain_prepare_irqs function
that the existing code can call into.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier

We are soon going to need the MSI layer to call into the domain allocators.
Instead of open coding this, make the standard irq_domain_alloc_irqs_recursive
function available to the MSI layer.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 15 December 2015, 1 commit
-
-
Submitted by Daniel Lezcano

When we try to compile a clocksource driver with the COMPILE_TEST option, we
can't select GENERIC_SCHED_CLOCK because the sched_clock() symbol would be
duplicated with the one defined for x86.

In order to fix that, we don't select GENERIC_SCHED_CLOCK in the driver's
Kconfig file but instead define some empty functions for the different symbols
in order to prevent unresolved ones.

This patch fixes the COMPILE_TEST option for compile-test coverage of the
clocksource drivers. Without this patch, we can't add the COMPILE_TEST option
for the clocksource drivers using GENERIC_SCHED_CLOCK.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
-
- 14 December 2015, 3 commits
-
-
Submitted by Thomas Gleixner

The new VMD device driver needs to iterate over a list of "demultiplexing"
interrupts. Protecting that list with a lock is not possible because the list
is also required in code paths which hold the irq descriptor lock. Therefore
the demultiplexing interrupt handler would create a lock inversion scenario if
it called a demux handler with the list protection lock held.

A solution for this is to free the irq descriptor via RCU, so the list can be
walked with the RCU read lock held.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Keith Busch <keith.busch@intel.com>
-
Submitted by Andreas Gruenbacher

Change the list operation to only return whether or not an attribute should be
listed. Copying the attribute names into the buffer is moved to the callers.
Since the result only depends on the dentry and not on the attribute name, we
do not pass the attribute name to list operations.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Submitted by Peter Zijlstra

Jan Stancek reported that I wrecked things for him by fixing things for
Vladimir :/

His report was due to an UNINTERRUPTIBLE wait getting -EINTR, which should not
be possible; however, my previous patch made this possible by unconditionally
checking signal_pending().

We cannot use current->state as was done previously, because it can be changed
right after the store to that variable. We must instead pass the initial state
along and use that.

Fixes: 68985633 ("sched/wait: Fix signal handling in bit wait helpers")
Reported-by: Jan Stancek <jstancek@redhat.com>
Reported-by: Chris Mason <clm@fb.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Chris Mason <clm@fb.com>
Reviewed-by: Paul Turner <pjt@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: tglx@linutronix.de
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: hpa@zytor.com
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 December 2015, 2 commits
-
-
Submitted by Chris Wilson

Currently the full stop_machine() routine is only enabled on SMP if module
unloading is enabled, or if the CPUs are hotpluggable. This leads to
configurations where stop_machine() is broken, as it will then only run the
callback on the local CPU with irqs disabled, and not stop the other CPUs or
run the callback on them.

For example, this breaks MTRR setup on x86 in certain configs since ea8596bb
("kprobes/x86: Remove unused text_poke_smp() and text_poke_smp_batch()
functions"), as the MTRR is only established on the boot CPU.

This patch removes the Kconfig option for STOP_MACHINE and uses the SMP and
HOTPLUG_CPU config options to compile the correct stop_machine() for the
architecture, removing the false dependency on MODULE_UNLOAD in the process.

Link: https://lkml.org/lkml/2014/10/8/124
References: https://bugs.freedesktop.org/show_bug.cgi?id=84794
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Pranith Kumar <bobby.prani@gmail.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Iulia Manda <iulia.manda21@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Chuck Ebbert <cebbert.lkml@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Nicolas Iooss

The kmemleak_init() definition in mm/kmemleak.c is marked __init, but its
prototype in include/linux/kmemleak.h has been marked __ref since commit
a6186d89 ("kmemleak: Mark the early log buffer as __initdata"). This causes a
section mismatch, which is reported as a warning when building with clang
-Wsection, because kmemleak_init() is declared in section .ref.text but
defined in .init.text.

Fix this by marking the kmemleak_init() prototype __init.

Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 12 December 2015, 2 commits
-
-
Submitted by Alan Stern

Some USB device / host controller combinations seem to have problems with Link
Power Management. For example, Steinar found that his xHCI controller wouldn't
handle bandwidth calculations correctly for two video cards simultaneously
when LPM was enabled, even though the bus had plenty of bandwidth available.

This patch introduces a new quirk flag for devices that should have LPM kept
disabled, and creates quirk entries for Steinar's devices.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Steinar H. Gunderson <sgunderson@bigfoot.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by James Bottomley

KASAN found that our additional element processing scripts drop off the end of
the VPD page into unallocated space. The reason is that not every element has
additional information, but our traversal routines think they do, leading to
them expecting far more additional information than is present.

Fix this by adding a gate to the traversal routine so that it only processes
elements that are expected to have additional information (the list is in
SES-2 section 6.1.13.1: Additional Element Status diagnostic page overview).

Reported-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
Tested-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
Cc: stable@vger.kernel.org
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
-
- 11 December 2015, 1 commit
-
-
Submitted by John Stultz

For adjtimex()'s ADJ_SETOFFSET, make sure the tv_usec value is sane. We might
multiply it later, which can cause an overflow and undefined behavior.

This patch introduces new helper functions to simplify the checking code and
adds comments to clarify.

Originally this patch was by Sasha Levin, but I've basically rewritten it, so
he should get credit for finding the issue and I should get the blame for any
mistakes made since. Also, credit to Richard Cochran for the phrasing used in
the comment for what is considered valid here.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
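The essence of the sanity check is a simple range test applied before the microseconds value is ever scaled: tv_usec must lie in [0, USEC_PER_SEC). The sketch below is illustrative only; the helper name and struct are made up and are not the kernel's helpers.

```c
/* Sketch of a tv_usec range check applied before any scaling to nanoseconds. */
#include <stdbool.h>
#include <stdio.h>

#define USEC_PER_SEC 1000000L

struct timeval_like { long tv_sec; long tv_usec; };

static bool usec_valid(const struct timeval_like *tv)
{
        /* reject negative or >= one-second microsecond fields up front */
        return tv->tv_usec >= 0 && tv->tv_usec < USEC_PER_SEC;
}

int main(void)
{
        struct timeval_like ok  = { .tv_sec = 1, .tv_usec = 500000 };
        struct timeval_like bad = { .tv_sec = 1, .tv_usec = 0x7fffffffL };

        printf("ok:  valid=%d\n", usec_valid(&ok));
        printf("bad: valid=%d (rejected before being scaled to nanoseconds)\n",
               usec_valid(&bad));
        return 0;
}
```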
-