- 10 September 2017, 1 commit
-
-
By Anthony Yznaga

For many sun4v processor types, reading or writing a privileged register has a latency of 40 to 70 cycles. Use a combination of the low-latency allclean, otherw, normalw, and nop instructions in etrap and rtrap to replace 2 rdpr and 5 wrpr instructions and improve etrap/rtrap performance. allclean, otherw, and normalw are available on NG2 and later processors. The average ticks to execute the flush-windows trap ("ta 0x3") with and without this patch on selected platforms:

    CPU         Not patched   Patched   % Latency Reduction
    NG2                1762      1558                -11.58
    NG4                3619      3204                -11.47
    M7                 3015      2624                -12.97
    SPARC64-X           829       770                 -7.12

Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 September 2017, 1 commit
-
-
By Matthew Wilcox

Where possible, call memset16(), memmove() or memcpy() instead of using open-coded loops. I don't like the calling convention that uses a byte count instead of a count of u16s, but it's a little late to change that. Reduces the code size of fbcon.o by almost 400 bytes on my laptop build.

[akpm@linux-foundation.org: fix build]
Link: http://lkml.kernel.org/r/20170720184539.31609-9-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Miller <davem@davemloft.net> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com> Cc: "Martin K. Petersen" <martin.petersen@oracle.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Minchan Kim <minchan@kernel.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Russell King <rmk+kernel@armlinux.org.uk> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
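As a rough illustration of the conversion this describes (the loop, names, and fill value below are invented; in mainline, memset16() takes its count in 16-bit elements):

```c
#include <linux/types.h>
#include <linux/string.h>

/* Before: an open-coded 16-bit fill (illustrative, not the actual fbcon code). */
static void fill_cells_slow(u16 *dst, u16 cell, unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		dst[i] = cell;
}

/* After: the same fill in a single, optimized library call. */
static void fill_cells_fast(u16 *dst, u16 cell, unsigned int count)
{
	memset16(dst, cell, count);	/* count is a number of u16s */
}
```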
-
- 26 August 2017, 1 commit
-
-
By Jiri Slaby

There is code duplicated across all architectures' headers for futex_atomic_op_inuser: namely op decoding, the access_ok check for uaddr, and the comparison of the result. Remove this duplication and leave to the arches only the needed assembly, which now lives in arch_futex_atomic_op_inuser. This effectively distributes Will Deacon's arm64 fix for the undefined behaviour reported by UBSAN to all architectures. The fix was done in commit 5f16a046 ("arm64: futex: Fix undefined behaviour with FUTEX_OP_OPARG_SHIFT usage"); look there for an example dump. And as suggested by Thomas, check for negative oparg too, because it was also reported to cause an undefined behaviour report. Note that s390 removed the access_ok check in d12a2970 ("s390/uaccess: remove pointless access_ok() checks") as access_ok there returns true. We introduce it back in the helper for the sake of simplicity (it gets optimized away anyway).

Signed-off-by: Jiri Slaby <jslaby@suse.cz> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> [s390] Acked-by: Chris Metcalf <cmetcalf@mellanox.com> [for tile] Reviewed-by: Darren Hart (VMware) <dvhart@infradead.org> Reviewed-by: Will Deacon <will.deacon@arm.com> [core/arm64] Cc: linux-mips@linux-mips.org Cc: Rich Felker <dalias@libc.org> Cc: linux-ia64@vger.kernel.org Cc: linux-sh@vger.kernel.org Cc: peterz@infradead.org Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: sparclinux@vger.kernel.org Cc: Jonas Bonn <jonas@southpole.se> Cc: linux-s390@vger.kernel.org Cc: linux-arch@vger.kernel.org Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: linux-hexagon@vger.kernel.org Cc: Helge Deller <deller@gmx.de> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: linux-snps-arc@lists.infradead.org Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: linux-xtensa@linux-xtensa.org Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> Cc: openrisc@lists.librecores.org Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Stafford Horne <shorne@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Richard Henderson <rth@twiddle.net> Cc: Chris Zankel <chris@zankel.net> Cc: Michal Simek <monstr@monstr.eu> Cc: Tony Luck <tony.luck@intel.com> Cc: linux-parisc@vger.kernel.org Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: linux-alpha@vger.kernel.org Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: linuxppc-dev@lists.ozlabs.org Cc: "David S. Miller" <davem@davemloft.net>
Link: http://lkml.kernel.org/r/20170824073105.3901-1-jslaby@suse.cz
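For orientation, this is roughly the shape of the consolidated generic helper described above, with only the atomic operation itself left per-architecture. Treat it as a simplified sketch (pagefault bookkeeping and diagnostics are omitted), not a verbatim copy of the kernel code:

```c
static int futex_atomic_op_inuser_sketch(unsigned int encoded_op, u32 __user *uaddr)
{
	unsigned int op  = (encoded_op & 0x70000000) >> 28;
	unsigned int cmp = (encoded_op & 0x0f000000) >> 24;
	int oparg  = sign_extend32((encoded_op & 0x00fff000) >> 12, 11);
	int cmparg = sign_extend32(encoded_op & 0x00000fff, 11);
	int oldval, ret;

	if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) {
		/* A negative or oversized shift is the UBSAN-reported case. */
		if (oparg < 0 || oparg > 31)
			return -EINVAL;
		oparg = 1 << oparg;
	}

	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
		return -EFAULT;

	/* Only this call remains per-architecture. */
	ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
	if (ret)
		return ret;

	switch (cmp) {
	case FUTEX_OP_CMP_EQ: return oldval == cmparg;
	case FUTEX_OP_CMP_NE: return oldval != cmparg;
	case FUTEX_OP_CMP_LT: return oldval <  cmparg;
	case FUTEX_OP_CMP_GE: return oldval >= cmparg;
	case FUTEX_OP_CMP_LE: return oldval <= cmparg;
	case FUTEX_OP_CMP_GT: return oldval >  cmparg;
	default:	      return -ENOSYS;
	}
}
```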
-
- 17 August 2017, 1 commit
-
-
By Paul E. McKenney

There is no agreed-upon definition of spin_unlock_wait()'s semantics, and it appears that all callers could do just as well with a lock/unlock pair. This commit therefore removes the underlying arch-specific arch_spin_unlock_wait() for all architectures providing it.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: <linux-arch@vger.kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Alan Stern <stern@rowland.harvard.edu> Cc: Andrea Parri <parri.andrea@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Boqun Feng <boqun.feng@gmail.com>
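A minimal before/after sketch of the substitution the changelog relies on (the caller and lock are invented for illustration):

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(foo_lock);

static void wait_for_foo_critical_sections(void)
{
	/* Before: spin_unlock_wait(&foo_lock); -- wait for any holder to drop it. */

	/* After: a lock/unlock pair gives an equivalent (and better defined)
	 * guarantee that all prior critical sections have completed. */
	spin_lock(&foo_lock);
	spin_unlock(&foo_lock);
}
```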
-
- 16 August 2017, 3 commits
-
-
By Nitin Gupta

Adds support for a 16GB hugepage size. To use this page size, boot with kernel parameters such as: default_hugepagesz=16G hugepagesz=16G hugepages=10. Testing: tested with the stream benchmark, which allocates 48G of arrays backed by 16G hugepages and does RW operations on them in parallel.

Orabug: 25362942
Cc: Anthony Yznaga <anthony.yznaga@oracle.com> Reviewed-by: Bob Picco <bob.picco@oracle.com> Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Nitin Gupta

get_user_pages() is used to do direct IO. It already handles the case where the address range is backed by PMD huge pages. This patch adds the case where the range could be backed by PUD huge pages.

Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jag Raman

Enables VCC port probe and removal to initialize and terminate VCC ports, respectively. When a device/port matching the VCC driver is added, the probe function is invoked along with a reference to the device; the remove function is called when the device is removed. Also adds APIs to cache and retrieve VCC ports from a VCC table.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 August 2017, 2 commits
-
-
By Zi Yan

THP migration has been added but only supports x86_64 at the moment. For all other architectures, swp_entry_to_pmd() only returns a zero pmd_t. Due to GCC zero-initializer bug #53119, the standard (pmd_t){0} initializer is not accepted by all GCC versions, and __pmd() is a feasible workaround. In addition, sparc32's pmd_t is an array instead of a single value, so (pmd_t){ {0}, } would be needed instead of (pmd_t){0}. Thus, a different __pmd() definition is needed on sparc32.

Signed-off-by: Zi Yan <zi.yan@cs.rutgers.edu> Signed-off-by: David S. Miller <davem@davemloft.net>
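A short illustration of the initializer issue, using simplified stand-ins for the real pmd_t definitions (the typedefs and macros here are hypothetical):

```c
/* Simplified stand-ins for the real page-table typedefs. */
typedef struct { unsigned long pmd; } pmd_scalar_t;     /* most architectures */
typedef struct { unsigned long pmdv[4]; } pmd_array_t;  /* sparc32-style      */

#define __pmd_scalar(x)	((pmd_scalar_t) { (x) })
#define __pmd_array(x)	((pmd_array_t)  { { (x), } })	/* extra braces for the array member */

static pmd_scalar_t zero_pmd_scalar(void)
{
	/* (pmd_scalar_t){0} trips GCC bug #53119 on some compiler versions;
	 * routing the constant through a __pmd()-style macro avoids it. */
	return __pmd_scalar(0);
}

static pmd_array_t zero_pmd_array(void)
{
	/* sparc32 needs its own macro because the member is an array. */
	return __pmd_array(0);
}
```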
-
By David S. Miller

It overflows the amount of space available in the initial .text section of the trap handler assembler in some configurations, resulting in build failures.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 August 2017, 5 commits
-
-
By Peter Zijlstra

Those architectures that have a special atomic_set() implementation also need a special atomic_set_release(): for the very same reason that WRITE_ONCE() is broken for them, smp_store_release() is too. The vast majority are architectures with a spinlock-hash-based atomic implementation, except hexagon, which seems to have a hardware 'feature'. The spinlock-based atomics should be SC; that is, none of them appear to place extra barriers in atomic_cmpxchg() or any of the other SC atomic primitives, and they therefore seem to rely on their spinlock implementation being SC (I did not fully validate all of that). Therefore, the normal atomic_set() is SC and can be used as atomic_set_release().

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Chris Metcalf <cmetcalf@mellanox.com> [for tile] Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: davem@davemloft.net Cc: james.hogan@imgtec.com Cc: jejb@parisc-linux.org Cc: rkuo@codeaurora.org Cc: vgupta@synopsys.com
Link: http://lkml.kernel.org/r/20170609110506.yod47flaav3wgoj5@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
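A hedged sketch of the resulting pattern (paraphrased; whether a given architecture spells it exactly this way is not claimed here):

```c
/* Architecture with spinlock-hash based atomics: atomic_set() is already
 * sequentially consistent there, so the release variant can simply reuse it. */
#define atomic_set_release(v, i)	atomic_set((v), (i))

/* Generic fallback for everyone else: a plain release store to the counter. */
#ifndef atomic_set_release
#define atomic_set_release(v, i)	smp_store_release(&(v)->counter, (i))
#endif
```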
-
By Vijay Kumar

Use the CPU_POKE hypervisor call to resume an idle CPU, if supported.

Signed-off-by: Vijay Kumar <vijay.ac.kumar@oracle.com> Reviewed-by: Anthony Yznaga <anthony.yznaga@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Vijay Kumar

This adds a new hypercall, CPU_POKE, for quickly waking up an idle CPU. CPU_POKE should only be sent to valid, non-local CPUs.

Signed-off-by: Rob Gardner <rob.gardner@oracle.com> Signed-off-by: Vijay Kumar <vijay.ac.kumar@oracle.com> Reviewed-by: Anthony Yznaga <anthony.yznaga@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Nitin Gupta

Adds support for a 16GB hugepage size. To use this page size, boot with kernel parameters such as: default_hugepagesz=16G hugepagesz=16G hugepages=10. Testing: tested with the stream benchmark, which allocates 48G of arrays backed by 16G hugepages and does RW operations on them in parallel.

Orabug: 25362942
Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Nitin Gupta

get_user_pages() is used to do direct IO. It already handles the case where the address range is backed by PMD huge pages. This patch adds the case where the range could be backed by PUD huge pages.

Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 August 2017, 2 commits
-
-
By Allen Pais

Recognize the SPARC-M8 CPU type, hardware caps, and CPU distribution map.

Signed-off-by: Allen Pais <allen.pais@oracle.com> Signed-off-by: David Aldridge <david.j.aldridge@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Allen Pais

Acked-by: Sam Ravnborg <sam@ravnborg.org> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Allen Pais <allen.pais@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 August 2017, 1 commit
-
-
By Willem de Bruijn

The send call ignores unknown flags, so legacy applications may already unwittingly pass MSG_ZEROCOPY. Continue to ignore this flag unless a socket opts in to zerocopy. Introduce the socket option SO_ZEROCOPY to enable MSG_ZEROCOPY processing. Processes can also query this socket option to detect kernel support for the feature; older kernels will return ENOPROTOOPT.

Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
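A small userspace sketch of the opt-in flow described above (error handling trimmed; assumes libc/kernel headers recent enough to define SO_ZEROCOPY and MSG_ZEROCOPY; the helper name is invented):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

static ssize_t send_zerocopy(int fd, const void *buf, size_t len)
{
	int one = 1;

	/* Opt in first; ENOPROTOOPT here means the kernel predates the feature. */
	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0) {
		perror("setsockopt(SO_ZEROCOPY)");
		return -1;
	}

	/* Once the socket has opted in, MSG_ZEROCOPY is honored instead of ignored. */
	return send(fd, buf, len, MSG_ZEROCOPY);
}
```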
-
- 19 July 2017, 1 commit
-
-
By Rob Gardner

This fixes another cause of random segfaults and bus errors that may occur while running perf with the callgraph option. Critical sections beginning with spin_lock_irqsave() raise the interrupt level to PIL_NORMAL_MAX (14) and intentionally do not block performance counter interrupts, which arrive at PIL_NMI (15). But some sections of code are "super critical" with respect to perf, because the perf_callchain_user() path accesses user space and may cause TLB activity as well as faults as it unwinds the user stack. One particular critical section occurs in switch_mm:

    spin_lock_irqsave(&mm->context.lock, flags);
    ...
    load_secondary_context(mm);
    tsb_context_switch(mm);
    ...
    spin_unlock_irqrestore(&mm->context.lock, flags);

If a perf interrupt arrives in between load_secondary_context() and tsb_context_switch(), then perf_callchain_user() could execute with the context ID of one process but with an active TSB for a different process. When the user stack is accessed, it is very likely to incur a TLB miss, since the hardware context ID has been changed. The TLB will then be reloaded with a translation from the TSB of one process, but using a context ID for another process. This exposes memory from one process to another, and since it is a mapping for stack memory, this usually causes the new process to crash quickly. This super critical section needs more protection than is provided by spin_lock_irqsave(), since perf interrupts must not be allowed in. Since __tsb_context_switch already goes through the trouble of disabling interrupts completely, we fix this by moving the secondary context load down into this better protected region.

Orabug: 25577560
Signed-off-by: Dave Aldridge <david.j.aldridge@oracle.com> Signed-off-by: Rob Gardner <rob.gardner@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
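A hedged sketch of what the reworked section looks like after the fix (the helper name and surrounding details are illustrative, not the literal patch):

```c
static void switch_mm_sketch(struct mm_struct *mm)
{
	unsigned long flags;

	spin_lock_irqsave(&mm->context.lock, flags);

	/* ... context allocation / TSB sizing elided ... */

	/*
	 * Was: load_secondary_context(mm); tsb_context_switch(mm);
	 * with a perf-NMI-sized window in between.  Now the secondary
	 * context is loaded inside the TSB switch path, which already
	 * runs with all interrupts (including PIL_NMI) disabled.
	 */
	tsb_context_switch_ctx(mm, CTX_HWBITS(mm->context));

	spin_unlock_irqrestore(&mm->context.lock, flags);
}
```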
-
- 17 July 2017, 1 commit
-
-
This ioctl does nothing to justify an _IOC_READ or _IOC_WRITE flag, because it doesn't copy anything from/to userspace to access the argument.

Fixes: 54ebbfb1 ("tty: add TIOCGPTPEER ioctl")
Signed-off-by: Gleb Fotengauer-Malinovskiy <glebfm@altlinux.org> Acked-by: Aleksa Sarai <asarai@suse.de> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
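To make the direction-flag point concrete, an illustrative before/after for this kind of ioctl number (treat the "before" encoding as an example of the problem rather than a quote of the old header; the 'T'/0x41 values follow the TIOCGPTPEER definition):

```c
#include <linux/ioctl.h>

/* Problematic: a directional encoding claims data is copied via the argument. */
#define TIOCGPTPEER_WITH_BOGUS_DIRECTION	_IOR('T', 0x41, int)

/* Correct: the argument is only a flags value interpreted directly; nothing
 * is copied to or from user memory through it. */
#define TIOCGPTPEER_NO_DIRECTION		_IO('T', 0x41)
```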
-
- 15 July 2017, 1 commit
-
-
By Jane Chu

A large sun4v SPARC system may have moments of intensive xcall activity, usually caused by unmapping many pages on many CPUs concurrently. This can flood receivers with CPU mondo interrupts for an extended period, causing some unlucky senders to hit a send-mondo timeout. This problem gets worse as the CPU count increases, because sometimes mappings must be invalidated on all CPUs, and sometimes all CPUs may gang up on a single CPU. But a busy system is not a broken system: in the above scenario, as long as the receiver is making forward progress processing mondo interrupts, the sender should continue to retry. This patch implements the receiver's forward-progress meter by introducing a per-cpu counter 'cpu_mondo_counter[cpu]', where 'cpu' is in the range 0..NR_CPUS. The receiver increments its counter as soon as it receives a mondo, and the sender tracks the receiver's counter. If the receiver has stopped making forward progress when the retry limit is reached, the sender declares a send-mondo timeout and panics; otherwise, the receiver is allowed to keep making forward progress. In addition, it has been observed that PCIe hotplug events generate Correctable Errors that are handled by the hypervisor and then the OS. The hypervisor 'borrows' a guest CPU strand briefly to provide the service. If that CPU strand is simultaneously the only CPU targeted by a mondo, it may not be available for the mondo within 20 msec, causing a SUN4V mondo timeout. It appears that 1 second is the agreed wait time between hypervisor and guest OS; this patch makes that adjustment.

Orabug: 25476541
Orabug: 26417466
Signed-off-by: Jane Chu <jane.chu@oracle.com> Reviewed-by: Steve Sistare <steven.sistare@oracle.com> Reviewed-by: Anthony Yznaga <anthony.yznaga@oracle.com> Reviewed-by: Rob Gardner <rob.gardner@oracle.com> Reviewed-by: Thomas Tai <thomas.tai@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
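A heavily simplified sketch of the forward-progress meter described above. The counter array name follows the changelog; the retry loop, limit, and try_send_mondo() helper are invented purely for illustration:

```c
#ifndef NR_CPUS
#define NR_CPUS 4096
#endif
#define SEND_MONDO_RETRY_LIMIT	10000

static unsigned long cpu_mondo_counter[NR_CPUS];

/* Receiver side: count every mondo that gets processed. */
static void note_mondo_received(int cpu)
{
	cpu_mondo_counter[cpu]++;
}

/* Sender side: only declare a timeout if the target stopped making progress. */
static void send_mondo_with_progress_check(int cpu)
{
	unsigned long seen = cpu_mondo_counter[cpu];
	unsigned int retries = 0;

	while (!try_send_mondo(cpu)) {		/* hypothetical send attempt */
		if (++retries < SEND_MONDO_RETRY_LIMIT)
			continue;

		if (cpu_mondo_counter[cpu] != seen) {
			/* Busy but alive: reset the budget and keep retrying. */
			seen = cpu_mondo_counter[cpu];
			retries = 0;
			continue;
		}
		panic("SUN4V mondo timeout: CPU %d made no forward progress\n", cpu);
	}
}
```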
-
- 13 July 2017, 1 commit
-
-
By Nicholas Piggin

For architectures that define HAVE_NMI_WATCHDOG, instead of having them provide the complete touch_nmi_watchdog() function, just have them provide arch_touch_nmi_watchdog(). This gives the generic code more flexibility in implementing this function, and arch implementations don't miss out on touching the softlockup watchdog or other generic details.

Link: http://lkml.kernel.org/r/20170616065715.18390-3-npiggin@gmail.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Don Zickus <dzickus@redhat.com> Reviewed-by: Babu Moger <babu.moger@oracle.com> Tested-by: Babu Moger <babu.moger@oracle.com> [sparc] Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
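The resulting split, roughly as it appears in the generic NMI header (paraphrased sketch):

```c
/* Architectures with HAVE_NMI_WATCHDOG now supply only the arch hook... */
void arch_touch_nmi_watchdog(void);

/* ...while the generic code owns touch_nmi_watchdog() and never forgets
 * the softlockup side. */
static inline void touch_nmi_watchdog(void)
{
	arch_touch_nmi_watchdog();
	touch_softlockup_watchdog();
}
```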
-
- 11 July 2017, 1 commit
-
-
By Masahiro Yamada

Since commit fcc8487d ("uapi: export all headers under uapi directories"), all (and only) headers under uapi directories are exported, but asm-generic wrappers are still exceptions. To complete decoupling the uapi headers from the kernel headers, move the generic-y entries for exported headers to uapi/asm/Kbuild. With this change, "make headers_install" will just need to parse uapi/asm/Kbuild to build up the exported headers.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 04 July 2017, 1 commit
-
-
By Al Viro

No users left.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 29 June 2017, 1 commit
-
-
By Tobias Klauser

The only user of thread_saved_pc() in non-arch-specific code was removed in commit 8243d559 ("sched/core: Remove pointless printout in sched_show_task()"), so remove the now-unused implementations as well. Some architectures use thread_saved_pc() in their arch-specific code; their thread_saved_pc() is left intact.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 June 2017, 3 commits
-
-
By Christoph Hellwig

Usually dma_supported() decisions are made by the dma_map_ops instance. Switch sparc to that model by providing a ->dma_supported instance for sbus that always returns false, providing implementations tailored to the sun4u and sun4v cases for sparc64, and leaving it unimplemented for PCI on sparc32, which means always supported.

Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: David S. Miller <davem@davemloft.net>
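A hedged sketch of what hanging the decision off the ops table looks like (the struct member follows the dma_map_ops convention of that era; the sbus ops shown here are trimmed to the one field of interest and the other callbacks are elided):

```c
#include <linux/dma-mapping.h>

/* SBUS devices cannot satisfy generic DMA masks, so the per-ops hook
 * rejects every query. */
static int sbus_dma_supported(struct device *dev, u64 mask)
{
	return 0;
}

static const struct dma_map_ops sbus_dma_ops_sketch = {
	/* .map_page, .unmap_page, .alloc, ... elided ... */
	.dma_supported	= sbus_dma_supported,
};
```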
-
By Christoph Hellwig

We can just use pci32_dma_ops directly.

Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: David S. Miller <davem@davemloft.net>
-
By Christoph Hellwig

DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: David S. Miller <davem@davemloft.net>
-
- 26 June 2017, 5 commits
-
-
By Jag Raman

Add a port_id field to the VIO device metadata to identify the port of a VIO device.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jag Raman

Enhances the search for VIO devices in the MDESC by leveraging already existing MDESC APIs. Enhances the changes in an earlier patch, "sparc: Machine description indices can vary", by using existing MD search functions. It also specifies a match function, thereby enabling device_find_child() to use it for matching device nodes in the MDESC. An API to find a VDEV node in the MDESC based on its md_node_info is also added; it is planned to be used by VIO device clients in the future.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jag Raman

- Allocate IRQs for VIO devices during probing.
- Allow clients to specify whether IRQs should be allocated for a given VIO device.
- Cache the device handle of the root node of the channel-devices sub-tree in the Machine Description (MDESC).

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jag Raman

Add the MDESC node name of the MDESC client to the VIO device metadata. It is later used to uniquely identify a node in the MDESC. The VIO and MDESC APIs are updated to handle this node name.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jag Raman

Add the following two APIs to the Machine Description (MDESC) code:
- mdesc_get_node: searches for a node in the Machine Description tree based on given information about that node.
- mdesc_get_node_info: retrieves information about a given node.

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 June 2017, 1 commit
-
-
By David Herrmann

This adds the new getsockopt(2) option SO_PEERGROUPS on SOL_SOCKET to retrieve the auxiliary groups of the remote peer. It is designed to naturally extend SO_PEERCRED; that is, the underlying data comes from the same credentials. Regarding its syntax, it is based on SO_PEERSEC: if the provided buffer is too small, ERANGE is returned and @optlen is updated; otherwise, the information is copied, @optlen is set to the actual size, and 0 is returned. While SO_PEERCRED (and thus `struct ucred') already returns the primary group, it lacks the auxiliary group vector. However, nearly all access controls (including kernel-side VFS and SYSVIPC, but also user-space polkit, DBus, ...) consider the entire set of groups, rather than just the primary group, and this is currently not possible with pure SO_PEERCRED. Instead, user space has to work around this and query the system database for the auxiliary groups of a UID retrieved via SO_PEERCRED. Unfortunately, there is no race-free way to query the auxiliary groups of the PID/UID retrieved via SO_PEERCRED. Hence, the current user-space solution is to use getgrouplist(3p), which itself falls back to NSS and whatever is configured in nsswitch.conf(3). This effectively checks which groups we *would* assign to the user if it logged in *now*. On normal systems it is as easy as reading /etc/group, but with NSS it can resort to querying network databases (e.g., LDAP), using IPC or network communication. Long story short: whenever we want to use auxiliary groups for access checks on IPC, we need further IPC to talk to the user/group databases, rather than just relying on SO_PEERCRED and the incoming socket. This is unfortunate, and might even result in deadlocks if the database query uses the same IPC as the original request. So far, those recursions/deadlocks have been avoided by using primitive IPC for all crucial NSS modules. However, we want to avoid re-inventing the wheel for each NSS module that might be involved in user/group queries. Hence, we would preferably make DBus (and other IPC that supports access management based on groups) work without resorting to the user/group database. This new SO_PEERGROUPS option would allow us to make dbus-daemon work without ever calling into NSS.

Cc: Michal Sekletar <msekleta@redhat.com> Cc: Simon McVittie <simon.mcvittie@collabora.co.uk> Reviewed-by: Tom Gundersen <teg@jklm.no> Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
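A compilable userspace sketch of the ERANGE-and-grow pattern described above (the helper name is invented; assumes libc/kernel headers that define SO_PEERGROUPS and a connected AF_UNIX socket fd obtained elsewhere):

```c
#include <errno.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Returns a malloc'd array of the peer's auxiliary gids, or NULL on error. */
static gid_t *get_peer_groups(int fd, int *ngroups)
{
	socklen_t len = 8 * sizeof(gid_t);	/* initial guess */
	gid_t *groups = malloc(len);

	while (groups &&
	       getsockopt(fd, SOL_SOCKET, SO_PEERGROUPS, groups, &len) < 0) {
		gid_t *bigger;

		if (errno != ERANGE) {		/* real failure */
			free(groups);
			return NULL;
		}
		bigger = realloc(groups, len);	/* len now holds the required size */
		if (!bigger) {
			free(groups);
			return NULL;
		}
		groups = bigger;
	}
	if (groups)
		*ngroups = (int)(len / sizeof(gid_t));
	return groups;
}
```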
-
- 20 June 2017, 1 commit
-
-
By Nagarathnam Muthusamy

This patch adds the prototypes of assembly-defined functions to asm-prototypes.h. Some prototypes are added directly, as they are not present in any existing header files.

Signed-off-by: Nagarathnam Muthusamy <nagarathnam.muthusamy@oracle.com> Reviewed-by: Babu Moger <babu.moger@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 June 2017, 4 commits
-
-
By Pavel Tatashin

Add a new get_tick() function that is hot-patched during boot based on the processor we are booting on.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com> Reviewed-by: Steven Sistare <steven.sistare@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Pavel Tatashin

This patch prepares the code for early boot time stamps by making it more modular:
- init_tick_ops() to initialize struct sparc64_tick_ops
- a new sparc64_tick_ops operation, get_frequency(), which returns a frequency

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com> Reviewed-by: Bob Picco <bob.picco@oracle.com> Reviewed-by: Steven Sistare <steven.sistare@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Pavel Tatashin

In the sched clock path we now have three loads:
- the function pointer
- the quotient for multiplication
- the offset

However, it is possible to improve performance substantially by guaranteeing that all three loads come from the same cache line. By moving these three values first in sparc64_tick_ops, and by making tick_operations 64-byte aligned, we guarantee this.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Steven Sistare <steven.sistare@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
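A hedged sketch of the layout idea (field names are plausible but illustrative; the point is simply that the three hot values share the leading cache line and the ops instance is 64-byte aligned):

```c
#include <linux/compiler.h>

struct sparc64_tick_ops_sketch {
	/* Hot: everything the sched clock fast path loads, packed so a
	 * single cache line covers all three. */
	unsigned long	(*get_tick)(void);
	unsigned long	ticks_per_nsec_quotient;
	unsigned long	offset;

	/* Colder operations follow. */
	unsigned long	(*get_frequency)(void);
	int		(*add_compare)(unsigned long);
	/* ... */
};

static struct sparc64_tick_ops_sketch tick_operations __aligned(64);
```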
-
By Pavel Tatashin

A few changes that were reported by checkpatch: removed all trailing whitespace in these two files.

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 June 2017, 1 commit
-
-
By Jag Raman

Add the following LDC APIs, which are planned to be used by LDC clients in the future:
- ldc_set_state: sets a given LDC channel to a given state
- ldc_mode: returns the mode of a given LDC channel
- ldc_print: prints info about a given LDC channel
- ldc_rx_reset: resets the RX queue of a given LDC channel

Signed-off-by: Jagannathan Raman <jag.raman@oracle.com> Reviewed-by: Aaron Young <aaron.young@oracle.com> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Bijan Mottahedeh <bijan.mottahedeh@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 June 2017, 1 commit
-
-
By Aleksa Sarai

When opening the slave end of a PTY, it is not possible for userspace to safely ensure that /dev/pts/$num is actually a slave (in cases where the mount namespace in which devpts was mounted is controlled by an untrusted process). In addition, there are several unresolvable race conditions if userspace were to attempt to detect attacks through stat(2) and other similar methods, and it is not clear how userspace could detect attacks involving FUSE. Resolve this by providing an interface for userspace to safely open the "peer" end of a PTY file descriptor by using the dentry cached by devpts. Since it is not possible to have an open master PTY without having its slave exposed in /dev/pts, this interface is safe. This interface currently does not provide a way to get the master PTY (since it is not clear whether such an interface is safe or even useful).

Cc: Christian Brauner <christian.brauner@ubuntu.com> Cc: Valentin Rothberg <vrothberg@suse.com> Signed-off-by: Aleksa Sarai <asarai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
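A small userspace sketch of the resulting interface in use (minimal error handling; assumes headers recent enough to define TIOCGPTPEER):

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

int main(void)
{
	int master = posix_openpt(O_RDWR | O_NOCTTY);
	int slave;

	if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
		perror("pty setup");
		return 1;
	}

	/* Open the slave through the master's own devpts dentry instead of
	 * resolving a /dev/pts/$num path that might not be trustworthy. */
	slave = ioctl(master, TIOCGPTPEER, O_RDWR | O_NOCTTY);
	if (slave < 0) {
		perror("ioctl(TIOCGPTPEER)");
		return 1;
	}

	printf("slave fd: %d\n", slave);
	return 0;
}
```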
-