- 14 December 2006, 31 commits
-
-
Committed by Ralf Baechle
Virtually indexed, physically tagged cache architectures can get away without cache flushing when forking. This patch adds a new cache flushing function flush_cache_dup_mm(struct mm_struct *) which for the moment I've implemented to do the same thing on all architectures except on MIPS where it's a no-op.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
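As a rough illustration of the hook (not the exact in-tree definitions): an architecture that still needs the fork-time flush can forward the new function to flush_cache_mm(), while one that can avoid it defines a no-op, and dup_mmap() calls flush_cache_dup_mm(oldmm) where it used to call flush_cache_mm(oldmm).

    /* sketch of an <asm/cacheflush.h> default; illustrative only */
    #ifndef flush_cache_dup_mm
    #define flush_cache_dup_mm(mm)  flush_cache_mm(mm)   /* conservative: flush as before */
    #endif

    /* an architecture that can skip the flush on fork would instead use: */
    /* #define flush_cache_dup_mm(mm)  do { } while (0)                   */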
-
Committed by Atsushi Nemoto
Provide a custom copy_user_highpage() to deal with aliasing issues on MIPS. It uses kmap_coherent() to map a user page for the kernel with the same color. Rewrite copy_to_user_page() and copy_from_user_page() with the new interfaces to avoid extra cache flushing.

The main part of this patch was originally written by Ralf Baechle; Atsushi Nemoto did the debugging.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Atsushi Nemoto
To allow a more effective copy_user_highpage() on certain architectures, a vma argument is added to the function and to cow_user_page(), allowing the implementation of these functions to check for the VM_EXEC bit.

The main part of this patch was originally written by Ralf Baechle; Atsushi Nemoto did the debugging.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Atsushi Nemoto
Problem:

1. There is a process containing two threads (T1 and T2). The thread T1 calls fork(). Then the dup_mmap() function is called in T1's context:

    static inline int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
    ...
            flush_cache_mm(current->mm);
    ...                                     /* A */
            (write-protect all Copy-On-Write pages)
    ...                                     /* B */
            flush_tlb_mm(current->mm);
    ...

2. When preemption happens between A and B (or on an SMP kernel), the thread T2 can run and modify data on COW pages without a page fault (the modified data will stay in the cache).

3. Some time after fork() has completed, the thread T2 may take a write-protection fault on a COW page.

4. The data of the COW page will then be copied to a newly allocated physical page (copy_cow_page()). It reads the data via the kernel mapping. The kernel mapping can have a different 'color' from the user space mapping of the thread T2 (dcache aliasing). Therefore copy_cow_page() will copy stale data, and the modified data in the cache will be lost.

In order to allow architecture code to deal with this problem, allow architecture code to override copy_user_highpage() by defining __HAVE_ARCH_COPY_USER_HIGHPAGE in <asm/page.h>.

The main part of this patch was originally written by Ralf Baechle; Atsushi Nemoto did the debugging.

Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
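A sketch of the override mechanism this introduces (illustrative; the generic fallback shown follows the pre-existing kmap_atomic copy, and the actual MIPS implementation differs):

    /* In an architecture's <asm/page.h> (sketch): */
    #define __HAVE_ARCH_COPY_USER_HIGHPAGE
    struct vm_area_struct;
    extern void copy_user_highpage(struct page *to, struct page *from,
                                   unsigned long vaddr, struct vm_area_struct *vma);

    /* Generic fallback in <linux/highmem.h> when no override is defined (sketch): */
    #ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE
    static inline void copy_user_highpage(struct page *to, struct page *from,
                                          unsigned long vaddr,
                                          struct vm_area_struct *vma)
    {
            char *vfrom = kmap_atomic(from, KM_USER0);
            char *vto   = kmap_atomic(to, KM_USER1);

            copy_user_page(vto, vfrom, vaddr, to);

            kunmap_atomic(vfrom, KM_USER0);
            kunmap_atomic(vto, KM_USER1);
    }
    #endif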
-
Committed by Robert P. J. Day
Run this:

    #!/bin/sh
    for f in $(grep -Erl "\([^\)]*\) *k[cmz]alloc" *) ; do
      echo "De-casting $f..."
      perl -pi -e "s/ ?= ?\([^\)]*\) *(k[cmz]alloc) *\(/ = \1\(/" $f
    done

And then go through and reinstate those cases where code is casting pointers to non-pointers. And then drop a few hunks which conflicted with outstanding work.

Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ian Molton <spyro@f2s.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Greg KH <greg@kroah.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Paul Fulghum <paulkf@microgate.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Karsten Keil <kkeil@suse.de>
Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Ian Kent <raven@themaw.net>
Cc: Steven French <sfrench@us.ibm.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Neil Brown <neilb@cse.unsw.edu.au>
Cc: Jaroslav Kysela <perex@suse.cz>
Cc: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
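For illustration, the kind of call site the script rewrites (hypothetical struct and variable names):

    /* Before: the void * returned by kmalloc() is cast redundantly. */
    struct foo *f = (struct foo *)kmalloc(sizeof(*f), GFP_KERNEL);

    /* After (what the script produces): the cast is simply dropped,
     * since void * converts implicitly to any object pointer in C. */
    struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);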
-
Committed by Helge Deller
Modify the sstfb (Voodoo1/2) driver:

- fix a memleak when removing the sstfb module
- fix sstfb to use the fbdev default videomode database
- add module option "mode_option" to set initial screen mode
- add sysfs-interface to turn VGA-passthrough on/off via /sys/class/graphics/fbX/vgapass
- remove old debug functions from ioctl interface

Signed-off-by: Helge Deller <deller@gmx.de>
Acked-By: James Simmons <jsimmons@infradead.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Geert Uytterhoeven
Remove references to the non-existent fbmon_valid_timings().

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: James Simmons <jsimmons@infradead.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by J. Bruce Fields
Define an op descriptor struct, use it to simplify nfsd4_proc_compound().

Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
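A sketch of the dispatch-table pattern this describes (field names and flag values are assumed for illustration; the real nfsd4 descriptor carries more information):

    typedef __be32 (*nfsd4op_func)(struct svc_rqst *, struct nfsd4_compound_state *,
                                   void *);

    struct nfsd4_operation {               /* one descriptor per NFSv4 op       */
            nfsd4op_func op_func;          /* handler to call                   */
            u32          op_flags;         /* e.g. "allowed without filehandle" */
    };

    /* The compound loop in nfsd4_proc_compound() then becomes, roughly:
     *
     *      op->status = nfsd4_ops[op->opnum].op_func(rqstp, cstate, &op->u);
     *
     * instead of a long switch (op->opnum) { case OP_...: ... } block.
     */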
-
Committed by J. Bruce Fields
Tuck away the replay_owner in the cstate while we're at it.

Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by J. Bruce Fields
Pass the saved and current filehandles together into all the nfsd4 compound operations. I want a unified interface to these operations so we can just call them by pointer and throw out the huge switch statement. Also I'll eventually want a structure like this, one that holds the state used during compound processing, for deferral.

Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by J. Bruce Fields
A comment here incorrectly states that "slack_space" is measured in words, not bytes. Remove the comment, and adjust a variable name and a few comments to clarify the situation. This is pure cleanup; there should be no change in functionality.

Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Adrian Bunk
The BLK_DEV_SWIM_IOP driver:
- has already been marked as BROKEN in 2.6.0 three years ago and
- is still marked as BROKEN.

Drivers that have been marked as BROKEN for such a long time seem unlikely to be revived in the foreseeable future. But if anyone ever wants to revive this driver, the code is still present in the older kernel releases.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Cc: Jens Axboe <jens.axboe@oracle.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Eric W. Biederman
This patch converts the tracking of the user space watchdog process from a pid_t to a struct pid. This makes us safe from pid wraparound issues and prepares the way for the pid namespace.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Petr Vandrovec <VANDROVE@vc.cvut.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Eric W. Biederman
smbfs keeps track of the user space server process in conn_pid. This converts that tracking to use a struct pid instead of a pid_t. This keeps us safe from pid wraparound issues and prepares the way for the pid namespace.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Eric W. Biederman
Currently this driver tracks the user space clients it should send signals to. In the presence of file descriptor passing this appears susceptible to confusion from pid wraparound issues. Replacing this with a struct pid prevents us from getting confused, and prepares for a pid namespace implementation.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
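A generic sketch of the pid_t to struct pid conversion pattern used across these three patches (every name here is hypothetical, not the actual driver's fields):

    #include <linux/pid.h>
    #include <linux/sched.h>

    struct foo_dev {
            struct pid *async_pid;          /* was: pid_t async_pid; */
    };

    static void foo_set_owner(struct foo_dev *dev)
    {
            /* take a reference on the caller's struct pid */
            dev->async_pid = get_pid(task_pid(current));
    }

    static void foo_notify(struct foo_dev *dev)
    {
            /* still reaches the right task even if the numeric pid is reused */
            kill_pid(dev->async_pid, SIGIO, 1);
    }

    static void foo_release(struct foo_dev *dev)
    {
            put_pid(dev->async_pid);        /* drop the reference on teardown */
            dev->async_pid = NULL;
    }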
-
Committed by Al Viro
Annotated, all places switched to keeping status net-endian.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Robert P. J. Day
All kcalloc() calls of the form "kcalloc(1,...)" are converted to the equivalent kzalloc() calls, and a few kcalloc() calls with the incorrect ordering of the first two arguments are fixed.

Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Adam Belay <ambx1@neo.rr.com>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Greg KH <greg@kroah.com>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Cc: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
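A representative before/after for the single-object case (hypothetical call site):

    /* before: kcalloc() used to allocate one zeroed object */
    p = kcalloc(1, sizeof(struct foo), GFP_KERNEL);

    /* after: kzalloc() expresses the same thing directly */
    p = kzalloc(sizeof(struct foo), GFP_KERNEL);

    /* the patch also fixes swapped arguments, i.e. kcalloc(size, n, ...)
     * call sites that should have been kcalloc(n, size, ...) */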
-
Committed by Ingo Molnar
When we print an assert due to scheduling-in-atomic bugs, and if lockdep is enabled, then the IRQ tracing information of lockdep can be printed to pinpoint the code location that disabled interrupts. This saved me quite a bit of debugging time in cases where the backtrace did not identify the irq-disabling site well enough.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Ingo Molnar
Most distributions enable sysrq support but set it to 0 by default. Add a sysrq_always_enabled boot option to always enable the sysrq keys. Useful for debugging, without having to modify the distribution's config files (which might not be possible if the kernel is on a live CD, etc.). Also, while at it, clean up the sysrq interfaces.

[bunk@stusta.de: make sysrq_always_enabled_setup() static]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
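A minimal sketch of how such a boot flag is typically wired up (the shape and the "sysrq_enabled" name are assumed; the actual drivers/char/sysrq.c code may differ). Booting with "sysrq_always_enabled" on the kernel command line then overrides whatever the sysctl was set to.

    #include <linux/cache.h>
    #include <linux/init.h>
    #include <linux/kernel.h>

    static int __read_mostly sysrq_enabled;            /* toggled via /proc/sys/kernel/sysrq */
    static int __read_mostly sysrq_always_enabled;     /* set by the new boot option         */

    static int __init sysrq_always_enabled_setup(char *str)
    {
            sysrq_always_enabled = 1;
            printk(KERN_INFO "debug: sysrq always enabled\n");
            return 1;
    }
    __setup("sysrq_always_enabled", sysrq_always_enabled_setup);

    /* every "is sysrq allowed?" check can then be routed through: */
    static int sysrq_on(void)
    {
            return sysrq_enabled || sysrq_always_enabled;
    }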
-
Committed by Chen, Kenneth W
Implement a block device specific .direct_IO method instead of going through the generic direct_io_worker() for block devices. direct_io_worker() is fairly complex because it needs to handle O_DIRECT on file systems, where it needs to perform block allocation, hole detection, extending the file on write, and tons of other corner cases. The end result is that it takes tons of CPU time to submit an I/O.

For a block device, the block allocation is much simpler, and a tight triple loop can be written to iterate over each iovec and each page within the iovec in order to construct/prepare a bio structure and then submit it to the block layer. This significantly speeds up O_DIRECT on block devices.

[akpm@osdl.org: small speedup]
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Zach Brown <zach.brown@oracle.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
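A heavily simplified sketch of the "iterate iovecs, then pages, packing them into bios" idea (illustrative only: the helper user_page_for() is made up, and the real fs/block_dev.c path also handles alignment, partial pages, page references, AIO completion, and error unwinding):

    for (seg = 0; seg < nr_segs; seg++) {
            unsigned long addr = (unsigned long)iov[seg].iov_base;
            size_t left = iov[seg].iov_len;

            while (left) {
                    unsigned int off = offset_in_page(addr);
                    unsigned int len = min_t(size_t, left, PAGE_SIZE - off);
                    struct page *page = user_page_for(addr);   /* assumed helper (get_user_pages) */

                    if (!bio_add_page(bio, page, len, off)) {
                            submit_bio(rw, bio);               /* bio full: send it ...      */
                            bio = bio_alloc(GFP_KERNEL, nr_pages);
                            bio_add_page(bio, page, len, off); /* ... and start a fresh one  */
                    }
                    addr += len;
                    left -= len;
            }
    }
    submit_bio(rw, bio);                                       /* flush the final bio        */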
-
Committed by Valerie Henson
Add "relatime" (relative atime) support. Relative atime only updates the atime if the previous atime is older than the mtime or ctime. Like noatime, but useful for applications like mutt that need to know when a file has been read since it was last modified.

A corresponding patch against mount(8) is available at http://userweb.kernel.org/~akpm/mount-relative-atime.txt

Signed-off-by: Valerie Henson <val_henson@linux.intel.com>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Karel Zak <kzak@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
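The core test amounts to roughly the following in the atime update path (a sketch assuming the new mount flag is called MNT_RELATIME; the surrounding touch_atime() logic is omitted):

    /* relatime: skip the update unless atime is older than ctime or mtime */
    if (mnt->mnt_flags & MNT_RELATIME) {
            if (timespec_compare(&inode->i_atime, &inode->i_mtime) >= 0 &&
                timespec_compare(&inode->i_atime, &inode->i_ctime) >= 0)
                    return;                 /* atime is already new enough */
    }

    inode->i_atime = current_fs_time(inode->i_sb);
    mark_inode_dirty_sync(inode);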
-
Committed by Chris Zankel
The kernel termios (ktermios) changes were somehow missed for Xtensa. This patch adds the ktermios structure and also includes a minor file name fix that was missed in the syscall patch.

Signed-off-by: Chris Zankel <chris@zankel.net>
Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Rafael J. Wysocki
Currently, to tell a task that it should go to the refrigerator, we set the PF_FREEZE flag for it and send a fake signal to it. Unfortunately there are two SMP-related problems with this approach. First, a task running on another CPU may be updating its flags while the freezer attempts to set PF_FREEZE for it, and this may leave the task's flags in an inconsistent state. Second, there is a potential race between freeze_process() and refrigerator() in which freeze_process() running on one CPU is reading a task's PF_FREEZE flag while refrigerator() running on another CPU has just set PF_FROZEN for the same task and attempts to reset PF_FREEZE for it. If the refrigerator wins the race, freeze_process() will conclude that PF_FREEZE hasn't been set for the task and will set it unnecessarily, so the task will go to the refrigerator once again after it's been thawed.

To solve the first of these problems we need to stop using PF_FREEZE to tell tasks that they should go to the refrigerator. Instead, we can introduce a special TIF_*** flag and use it for this purpose, since it is allowed to change other tasks' TIF_*** flags and there are special calls for it.

To avoid the freeze_process()-refrigerator() race we can make freeze_process() always check the task's PF_FROZEN flag after it has read its "freeze" flag. We should also make sure that refrigerator() will always reset the task's "freeze" flag after it has set PF_FROZEN for it.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Pavel Machek <pavel@ucw.cz>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Cc: Andi Kleen <ak@muc.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
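A conceptual sketch of the ordering described above (the freeze-request flag is assumed to be called TIF_FREEZE; this is not the actual kernel/power/process.c code):

    /* freeze_process() side: request the freeze, then re-check PF_FROZEN so a
     * task that already froze concurrently is not left with a stale request. */
    set_tsk_thread_flag(p, TIF_FREEZE);            /* "please enter the refrigerator"  */
    smp_mb();
    if (p->flags & PF_FROZEN)
            clear_tsk_thread_flag(p, TIF_FREEZE);  /* it froze already: cancel request */

    /* refrigerator() side: mark ourselves frozen first, then clear the request,
     * so the check above cannot miss the transition. */
    current->flags |= PF_FROZEN;
    smp_mb();
    clear_tsk_thread_flag(current, TIF_FREEZE);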
-
Committed by Eric Dumazet
When some objects are allocated by one CPU but freed by another CPU we can consume a lot of cycles doing divides in obj_to_index(). (Typical load on a dual processor machine where network interrupts are handled by one particular CPU (allocating skbufs) and the other CPU is running the application (consuming and freeing skbufs).)

Here on one production server (dual-core AMD Opteron 285), I noticed this divide took 1.20% of CPU_CLK_UNHALTED events in the kernel. But Opterons are quite modern CPUs and the divide is much more expensive on older architectures: on a 200 MHz sparcv9 machine, the division takes 64 cycles instead of 1 cycle for a multiply.

Doing some math, we can use a reciprocal multiplication instead of a divide. If we want to compute V = (A / B) (A and B being u32 quantities) we can instead use:

    V = ((u64)A * RECIPROCAL(B)) >> 32;

where RECIPROCAL(B) is precalculated to ((1LL << 32) + (B - 1)) / B

Note: I wrote pure C code for clarity. gcc output for i386 is not optimal but acceptable:

    mull   0x14(%ebx)
    mov    %edx,%eax        // part of the >> 32
    xor    %edx,%edx        // useless
    mov    %eax,(%esp)      // could be avoided
    mov    %edx,0x4(%esp)   // useless
    mov    (%esp),%ebx

[akpm@osdl.org: small cleanups]
Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
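The transformation in standalone C (equivalent to the formula above; an illustration, not the kernel's lib/ helper verbatim), with a small self-check that the result matches the plain divide over the range used here:

    #include <assert.h>
    #include <stdint.h>

    /* Precompute once per cache: RECIPROCAL(B) = ((1 << 32) + B - 1) / B */
    static uint32_t reciprocal_value(uint32_t b)
    {
            return (uint32_t)(((1ULL << 32) + b - 1) / b);
    }

    /* Replace a / b by a multiply and a shift. */
    static uint32_t reciprocal_divide(uint32_t a, uint32_t r)
    {
            return (uint32_t)(((uint64_t)a * r) >> 32);
    }

    int main(void)
    {
            uint32_t b = 192;                     /* e.g. an object size in bytes */
            uint32_t r = reciprocal_value(b);
            uint32_t a;

            for (a = 0; a < 65536; a++)           /* offsets within a slab, say   */
                    assert(reciprocal_divide(a, r) == a / b);
            return 0;
    }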
-
Committed by Paul Jackson
Elaborate the API for calling cpuset_zone_allowed(), so that users have to explicitly choose between the two variants:

    cpuset_zone_allowed_hardwall()
    cpuset_zone_allowed_softwall()

Until now, whether or not you got the hardwall flavor depended solely on whether or not you or'd in the __GFP_HARDWALL gfp flag to the gfp_mask argument. If you didn't specify __GFP_HARDWALL, you implicitly got the softwall version. Unfortunately, this meant that users would end up with the softwall version without thinking about it. Since only the softwall version might sleep, this led to bugs with possible sleeping in interrupt context on more than one occasion.

The hardwall version requires that the current task's mems_allowed allows the node of the specified zone (or that you're in interrupt, or that __GFP_THISNODE is set, or that you're on a one-cpuset system). The softwall version, depending on the gfp_mask, might allow a node if it was allowed in the nearest enclosing cpuset marked mem_exclusive (which requires taking the cpuset lock 'callback_mutex' to evaluate).

This patch removes the cpuset_zone_allowed() call, and forces the caller to explicitly choose between the hardwall and the softwall case. If the caller wants the gfp_mask to determine this choice, they should (1) be sure they can sleep or that __GFP_HARDWALL is set, and (2) invoke the cpuset_zone_allowed_softwall() routine.

This adds another 100 or 200 bytes to the kernel text space, due to the few lines of nearly duplicate code at the top of both cpuset_zone_allowed_* routines. It should save a few instructions executed for the calls that turned into calls of cpuset_zone_allowed_hardwall, thanks to not having to set (before the call) then check (within the call) the __GFP_HARDWALL flag.

For the most critical call, from get_page_from_freelist(), the same instructions are executed as before: the old cpuset_zone_allowed() routine it used to call is the same code as the cpuset_zone_allowed_softwall() routine that it calls now.

Not a perfect win, but it seems worth it, to reduce the chance of hitting a sleeping-with-irqs-off complaint again.

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
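An illustrative caller-side view of the choice the split forces (made-up call sites inside a zone iteration loop; the two function names are the ones introduced by the patch):

    /* Context that must not sleep (interrupt, or hardwall semantics wanted): */
    if (!cpuset_zone_allowed_hardwall(zone, gfp_mask))
            continue;                       /* skip this zone */

    /* Process context where sleeping is acceptable and the nearest
     * mem_exclusive ancestor may be consulted (may take callback_mutex): */
    if (!cpuset_zone_allowed_softwall(zone, gfp_mask))
            continue;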
-
Committed by Christoph Lameter
More cleanups for slab.h:

1. Remove tabs from weird locations as suggested by Pekka.
2. Drop the check for NUMA and SLAB_DEBUG from the fallback section as suggested by Pekka.
3. Use static inline for the fallback defs, as also suggested by Pekka.
4. Make kmem_ptr_valid() take a const * argument.
5. Separate the NUMA fallback definitions from the kmalloc_track fallback definitions.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Christoph Lameter
This is a response to an earlier discussion on linux-mm about splitting slab.h components per allocator. The patch is against 2.6.19-git11. See http://marc.theaimsgroup.com/?l=linux-mm&m=116469577431008&w=2

This patch cleans up the slab header definitions. We define the common functions of slob and slab in slab.h and put the extra definitions needed for slab's kmalloc implementations in <linux/slab_def.h>. In order to get a greater set of common functions we add several empty functions to slob.c and also rename slob's kmalloc to __kmalloc.

Slob does not need any special definitions since we introduce a fallback case. If there is no need for a slab implementation to provide its own kmalloc mess^H^H^Hacros then we simply fall back to the __kmalloc functions. That is sufficient for SLOB.

Sort the functions in slab.h according to their functionality. First the functions operating on struct kmem_cache *, then the kmalloc related functions, followed by special debug and fallback definitions. Also redo a lot of comments.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
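The fallback idea in miniature (a sketch of the shape described; the guard macro name is made up and the real <linux/slab.h> text differs):

    /* An allocator that wants its own optimized kmalloc() (e.g. SLAB via
     * <linux/slab_def.h>) provides one; otherwise slab.h falls back to the
     * plain out-of-line entry point, which is all SLOB needs now that its
     * kmalloc is named __kmalloc. */
    extern void *__kmalloc(size_t size, gfp_t flags);

    #ifndef KMALLOC_PROVIDED_BY_ALLOCATOR          /* marker name assumed */
    static inline void *kmalloc(size_t size, gfp_t flags)
    {
            return __kmalloc(size, flags);
    }
    #endif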
-
Committed by Eric Dumazet
Fields of struct pipe_inode_info do not have a precise layout (i.e. they are not optimized to fit cache lines nor to reduce cache line ping-pong). The bufs[] array is *large* and is placed near the beginning of the structure, so all following fields have a large offset. This is unfortunate because many archs have smaller instructions when using small offsets relative to a base register. On x86 for example, 7-bit offsets have smaller instruction lengths.

Moving bufs[] to the end of the structure permits all other fields to have small offsets, and reduces text size and icache pressure.

    # size vmlinux.pre vmlinux
       text    data     bss      dec     hex filename
    3268989  664356  492196  4425541  438745 vmlinux.pre
    3268765  664356  492196  4425317  438665 vmlinux

So this patch reduces text size by 224 bytes on my x86_64 machine. Similar results on ia32.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
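A toy illustration of the layout effect (generic C, not the pipe structures themselves): with a large array first, every scalar field ends up at a large offset from the struct base; with the array last, the hot scalar fields stay within the short-displacement range of x86 addressing.

    struct before {
            char big[4096];                 /* pushes everything below it far away  */
            unsigned int nrbufs, curbuf;    /* offsets >= 4096: need 32-bit disp    */
    };

    struct after {
            unsigned int nrbufs, curbuf;    /* offsets 0 and 4: fit in 8-bit disp   */
            char big[4096];                 /* large array moved to the end         */
    };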
-
Committed by Eric W. Biederman
This reverts commit 373beb35. No one is using this identifier yet. The purpose of this identifier is to export nsproxy to user space, which is wrong. nsproxy is an internal implementation optimization, which should keep our fork times from getting slower as we increase the number of global namespaces you don't have to share.

Adding a global identifier like this is inappropriate because it makes namespaces inherently non-recursive, greatly limiting what we can do with them in the future.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Cedric Le Goater <clg@fr.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by Eric Dumazet
- pipe/splice should use const pipe_buf_operations and file_operations
- struct pipe_inode_info has an unused field "start": get rid of it.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 13 December 2006, 9 commits
-
-
Committed by Dominik Brodowski
Support for Core CPUs was broken in two ways in speedstep-lib: for x86_64 we missed an MSR definition, and for both x86_64 and i386 the FSB calculation was off by a factor of four (it's a quad-pumped bus). Also increase the accuracy of the calculation.

Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Dave Jones <davej@redhat.com>
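A worked illustration of the factor of four (the numbers are examples, not taken from the driver): on a quad-pumped bus there are four data transfers per bus clock, so the marketed FSB rate is 4x the base clock, while the core clock is base clock times the multiplier.

    #include <stdio.h>

    int main(void)
    {
            unsigned int bus_khz  = 166667;         /* ~166.67 MHz base bus clock            */
            unsigned int fsb_khz  = bus_khz * 4;    /* quad-pumped: ~666.7 MHz ("FSB 667")   */
            unsigned int core_khz = bus_khz * 10;   /* e.g. 10x multiplier: ~1.67 GHz core   */

            printf("bus %u kHz, FSB %u kHz, core %u kHz\n", bus_khz, fsb_khz, core_khz);
            return 0;
    }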
-
Committed by Ralph Campbell
The QLogic InfiniPath HCAs use programmed I/O instead of HW DMA. This patch allows a verbs device driver to interpose on DMA mapping function calls in order to avoid relying on bus_to_virt() and phys_to_virt() to undo the mappings created by dma_map_single(), dma_map_sg(), etc.

Signed-off-by: Ralph Campbell <ralph.campbell@qlogic.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Tony Luck
Because slot 1 of an instruction bundle crosses the boundary between two consecutive 8-byte words, kprobes on slot 1 used to be disabled. This patch enables kprobes on slot 1: it only replaces the higher 8 bytes of the instruction bundle and changes the exception code to ignore the low 12 bits of the break number (which are across the boundary in the lower 8 bytes of the bundle). For those instructions which must execute regardless of the qp bits, kprobes on slot 1 remain disabled.

Signed-off-by: bibo,mao <bibo.mao@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Sean Hefty
Export the rdma cm interfaces to userspace via a misc device.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
Allow the use of UD QPs through the rdma_cm, in order to provide address translation services for resolving IB addresses for datagram messages using SIDR.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
During connection establishment, the passive side of a connection can receive messages from the active side before the connection event has been delivered to the user. Allow the passive side to send messages in response to received data before the event is delivered. To handle the case where the connection messages are lost, a new rdma_notify() function is added that users may invoke to force a connection into the established state.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
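A sketch of how a consumer on the passive side might use the new call when it sees traffic before the ESTABLISHED event (illustrative; ctx and its connected flag are made-up consumer state, and the rdma_notify(id, event) signature is assumed):

    /* A receive completed, but the rdma_cm has not reported the connection
     * as established yet: the connection reply may have been dropped.
     * Nudge the rdma_cm state machine into the established state. */
    if (!ctx->connected && wc.opcode == IB_WC_RECV)
            rdma_notify(ctx->cm_id, IB_EVENT_COMM_EST);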
-
Committed by Sean Hefty
Connection information was never given to the recipient of a connection request or reply message; only the event was delivered. Report the connection data with the event to allow the user to reject the connection based on the requested parameters, or to adjust their resources to match the request.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Sean Hefty
The qp_type parameter passed into the rdma_cm is unneeded, and can be misleading. The QP type should be determined from the port space.

Signed-off-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Committed by Dean Nelson
This patch eliminates a potential deadlock that is possible when XPC disconnects a channel to a partition that has gone down. This deadlock will occur if at least one of the kthreads created by XPC for the purpose of making callouts to the channel's registerer is detained in the registerer and will not return to XPC until some registerer request occurs on the now downed partition. The potential for a deadlock is removed by ensuring that there is always a kthread available to make the channel-disconnecting callout to the registerer.

Signed-off-by: Dean Nelson <dcn@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-