- 23 October 2007, 21 commits
-
-
Committed by Jes Sorensen
Move setup_regs() to lguest_arch_setup_regs() in i386_core.c given that this is very architecture specific. Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Jes Sorensen
Apply Clue 2x4 to lguest userland<->kernel handling code and the lguest launcher. Pointers are not to be passed in u32's! Basic rule of thumb: anything passing u32's back and forth should be passing unsigned longs to be portable to 64-bit archs. For those who have forgotten already, I repeat: NO POINTERS IN u32! Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
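A minimal illustration of the rule (hypothetical struct and field names, not the actual lguest interfaces): on a 64-bit arch a pointer stuffed into a u32 is silently truncated, while an unsigned long is always wide enough to round-trip it.

    #include <linux/types.h>

    struct bad_req {
        u32 addr;               /* truncates a pointer on 64-bit archs */
    };

    struct good_req {
        unsigned long addr;     /* same width as a pointer on 32- and 64-bit */
    };

    static void fill_requests(struct bad_req *b, struct good_req *g, void *buf)
    {
        b->addr = (u32)(unsigned long)buf;  /* upper 32 bits are lost */
        g->addr = (unsigned long)buf;       /* round-trips safely */
    }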
-
Committed by Jes Sorensen
Clean up the hypercall code to make the code in hypercalls.c architecture independent. First process the common hypercalls and then call lguest_arch_do_hcall() if the call hasn't been handled. Rename struct hcall_ring to hcall_args. This patch requires the previous patch which reorganizes the layout of struct lguest_regs on i386 so they match the layout of struct hcall_args. Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Rusty Russell
Currently we look at the "trapnum" to see if the Guest wants a hypercall. But once the hypercall is done we have to reset trapnum to a bogus value, otherwise if we exit to userspace and return, we'd run the same hypercall twice (that was a nasty bug to find!). This has two main effects: 1) When Jes's patch changes the hypercall args to be a generic "struct hcall_args" we simply change the type of "lg->hcall". It's set by arch code, so if it has to copy args or something it can do so, and point "hcall" into lg->arch somewhere. 2) Async hypercalls only get run when an actual hypercall is pending. This simplifies the code a little and is a more logical semantic. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
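A rough sketch of the resulting flow (illustrative names, not the exact lguest code): the hypercall only runs while a pending-args pointer is set, and the pointer is cleared once the call is serviced, so returning to userspace and coming back can never replay it.

    /* Illustrative sketch only -- not the exact lguest structures. */
    struct hcall_args;

    struct guest {
        struct hcall_args *hcall;   /* non-NULL only while a hypercall is pending */
    };

    static void do_hypercall(struct guest *lg, struct hcall_args *args) { /* ... */ }
    static void do_async_hcalls(struct guest *lg) { /* ... */ }

    static void service_pending_hypercall(struct guest *lg)
    {
        /* Arch code sets lg->hcall when the Guest traps with a hypercall. */
        if (!lg->hcall)
            return;

        do_hypercall(lg, lg->hcall);
        do_async_hcalls(lg);        /* async calls run only when a real call is pending */
        lg->hcall = NULL;           /* clear it so exiting to userspace can't rerun it */
    }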
-
Committed by Jes Sorensen
Move eax next to ebx/ecx/edx in struct lguest_regs on i386, so they are located together and can map directly to a struct hcall_ring entry (which will be renamed struct hcall_args in a subsequent patch). This is in preparation for making the hcall code architecture independent. Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Jes Sorensen
Separate the i386 architecture-specific code from core.c, move it to x86/core.c, and add an x86/lguest.h header file to match. Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Rusty Russell
This simplifies the code a little, in preparation for allowing alternate system call vectors in guests (Plan 9 uses 0x40). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Rusty Russell
Back when we had all the Guest state in the switcher, we had a fixed array of them. This is no longer necessary. If we switch the network code to using random_ether_addr (46 bits is enough to avoid clashes), we can get rid of the concept of "guest id" altogether. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
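For reference, random_ether_addr() from <linux/etherdevice.h> fills in a random, locally administered unicast MAC; with 46 random bits, clashes between a handful of guests are vanishingly unlikely. A hedged sketch of how a virtual NIC could pick its address (the helper name is illustrative):

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    /* Illustrative helper: give a guest's virtual NIC a random MAC instead of
     * deriving one from a per-guest id. */
    static void lguestnet_assign_mac(struct net_device *dev)
    {
        random_ether_addr(dev->dev_addr);   /* 46 random bits, locally administered, unicast */
    }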
-
Committed by Rusty Russell
In order to avoid problematic special linking of the Launcher, we give the Host an offset: this means we can use any memory region in the Launcher as Guest memory rather than insisting on mmap() at 0. The result is quite pleasing: a number of casts are replaced with simple additions. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
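The effect can be sketched as follows (illustrative names, not the actual lguest helpers): rather than assuming Guest physical address 0 sits at Launcher address 0, the Host keeps a per-guest base and turns the old casts into additions.

    /* Illustrative sketch only. */
    struct guest_mem {
        unsigned long mem_base;   /* where the Launcher's guest-memory region starts */
        unsigned long pfn_limit;  /* guest memory size, in pages */
    };

    /* Convert a Guest "physical" address into an address in the Launcher's
     * (and hence the Host's) mapping of that memory. */
    static void *guest_pa_to_host(struct guest_mem *g, unsigned long guest_pa)
    {
        return (void *)(g->mem_base + guest_pa);
    }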
-
Committed by Rusty Russell
lguest uses a "switcher" shim mapped high to bounce between host and guest. As lguest becomes less i386-centric, we separate this code into a subdir. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Rusty Russell
Lguest has two sides: host support (to launch guests) and guest support (replacement boot path and paravirt_ops). This moves the guest side to arch/x86/lguest where it's closer to related code. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Andi Kleen <ak@suse.de>
-
Committed by Tony Breeds
Currently lguest will spend a lot of time waking up the host, as it cannot go tickless (if the [host] TSC has been marked unstable). On my laptop I was getting ~40% of wakeups from lguest. With this patch applied, my laptop is much happier! Signed-off-by: Tony Breeds <tony@bakeyournoodle.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Rusty Russell
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Rusty Russell
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Jes Sorensen
Use copy_to_user() when copying a struct timespec to the guest - put_user() cannot handle two longs in one go on a 64-bit arch. Signed-off-by: Jes Sorensen <jes@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Jes Sorensen <jes@sgi.com> Cc: Al Viro <viro@ftp.linux.org.uk>
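A minimal sketch of the distinction (the function and the destination pointer are illustrative): put_user() moves a single machine-word-sized scalar, so a two-member struct timespec has to go through copy_to_user() instead.

    #include <linux/errno.h>
    #include <linux/time.h>
    #include <linux/uaccess.h>

    /* Copy the current kernel time out to a user-supplied struct timespec. */
    static int give_guest_time(struct timespec __user *user_ts)
    {
        struct timespec ts = current_kernel_time();

        /* put_user(ts, user_ts) can't move two longs at once on 64-bit;
         * copy_to_user() copies the whole struct and returns the number of
         * bytes that could NOT be copied. */
        if (copy_to_user(user_ts, &ts, sizeof(ts)))
            return -EFAULT;
        return 0;
    }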
-
Committed by Rusty Russell
It hasn't been needed since a very early prototype of lguest. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Rusty Russell
Move lguest under the virtualization menu. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Avi Kivity <avi@qumranet.com>
-
Committed by Rusty Russell
1) Group all the "guest OS" support options together, under a PARAVIRT_GUEST menu. 2) Make those options select CONFIG_PARAVIRT, as suggested by Andi. 3) Make kconfig help titles consistent. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Andi Kleen <ak@suse.de> Cc: Zach Amsden <zach@vmware.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Chris Wright <chrisw@sous-sol.org>
-
Committed by Jens Axboe
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Committed by Stefan Richter
New warning since commit ab88ca48, "firewire: fw-ohci: missing dma_unmap_single":
drivers/firewire/fw-ohci.c: In function 'at_context_transmit':
drivers/firewire/fw-ohci.c:609: warning: 'payload_bus' may be used uninitialized in this function
Access to payload_bus is conditional on packet->payload_length > 0, and that won't change while in at_context_queue_packet. Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
-
Committed by Stefan Richter
because there seems to be more time needed to implement this. Also, change related error return values to more appropriate ones. Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
-
- 22 October 2007, 19 commits
-
-
Committed by Laurent Vivier
In kvm_flush_remote_tlbs(), replace a loop using smp_call_function_single() by a single call to smp_call_function_mask() (which is new for x86_64). Signed-off-by: Laurent Vivier <Laurent.Vivier@bull.net> Signed-off-by: Avi Kivity <avi@qumranet.com>
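Roughly, the change looks like this (a hedged sketch with illustrative names and 2.6.23-era call signatures; the cpumask and the flush callback stand in for KVM's own):

    #include <linux/cpumask.h>
    #include <linux/smp.h>

    static void ack_flush(void *info)
    {
        /* nothing to do -- the IPI itself forces the remote CPU through the flush path */
    }

    /* Before: one IPI per target CPU. */
    static void flush_each_cpu(cpumask_t cpus)
    {
        int cpu;

        for_each_cpu_mask(cpu, cpus)
            smp_call_function_single(cpu, ack_flush, NULL, 0, 1);
    }

    /* After: a single call covering the whole mask. */
    static void flush_all_at_once(cpumask_t cpus)
    {
        smp_call_function_mask(cpus, ack_flush, NULL, 1);
    }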
-
Committed by FUJITA Tomonori
x86_64 defines ARCH_HAS_SG_CHAIN. So if IOMMU implementations don't support sg chaining, we will get data corruption. Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Acked-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Jens Axboe <jens.axboe@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Keshavamurthy, Anil S
pci_dev->sysdata is highly overloaded, and IOMMU is currently broken because the IOMMU code depends on this field. This patch introduces a new field in pci_dev's dev.archdata struct to hold per-device IOMMU private data. Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Greg KH <greg@kroah.com> Cc: Jeff Garzik <jeff@garzik.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
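In outline (a hedged sketch; the helper names are illustrative and the layout of struct dev_archdata is arch-specific), the IOMMU driver keys its per-device state off dev.archdata rather than the overloaded sysdata pointer:

    #include <linux/pci.h>

    /* Assumes the x86_64 struct dev_archdata has gained a 'void *iommu'
     * member, which is what this patch adds. */
    static void set_device_iommu_info(struct pci_dev *pdev, void *info)
    {
        pdev->dev.archdata.iommu = info;
    }

    static void *get_device_iommu_info(struct pci_dev *pdev)
    {
        return pdev->dev.archdata.iommu;
    }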
-
Committed by Keshavamurthy, Anil S
This patch adds PageSelectiveInvalidation support, replacing the existing DomainSelectiveInvalidation for intel_{map/unmap}_sg() calls, and also enables mapping one big contiguous DMA virtual address to the discontiguous physical addresses of an SG list. "Domain selective invalidations" wipe out the IOMMU address translation cache based on domain ID, whereas "page selective invalidations" wipe out the IOMMU address translation cache only for the given address mask range, which is more cache friendly than domain selective invalidations.

Here is how it is done.

1) Changes to iova.c: alloc_iova() now takes a bool size_aligned argument which, when set, returns an IO virtual address that is naturally aligned to 2 ^ x, where x is the order of the size requested. Returning a naturally aligned IO virtual address helps the IOMMU do "page selective invalidations", which are IOMMU cache friendly compared to "domain selective invalidations".

2) Changes to drivers/pci/intel-iommu.c: clean up the intel_{map/unmap}_{single/sg}() calls so that the sg map/unmap calls no longer depend on intel_{map/unmap}_single(). intel_map_sg() now computes the total DMA virtual address required, allocates a size-aligned total DMA virtual address, and maps the discontiguous physical addresses to that contiguous DMA virtual address. In the intel_unmap_sg() case, since the DMA virtual address is contiguous and size-aligned, PageSelectiveInvalidation is used, replacing the earlier DomainSelectiveInvalidations.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Greg KH <greg@kroah.com> Cc: Ashok Raj <ashok.raj@intel.com> Cc: Suresh B <suresh.b.siddha@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Arjan van de Ven <arjan@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
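The alloc_iova() interface after this change looks roughly like the following (a sketch of the signature plus an illustrative map-sg call; the variable names are hypothetical):

    #include <linux/types.h>

    struct iova_domain;
    struct iova;

    /* The new bool argument: when true, the returned IO virtual address is
     * naturally aligned to 2^order(size), so unmap can later use a single
     * page-selective invalidation over that range. */
    struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
                            unsigned long limit_pfn, bool size_aligned);

    /* e.g. in intel_map_sg(): one size-aligned allocation covers the whole list
     *     iova = alloc_iova(&domain->iovad, nr_pages, dma_limit_pfn, true);   */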
-
Committed by Keshavamurthy, Anil S
This config option (DMAR_FLPY_WA) sets up a 1:1 mapping for the floppy device so that the floppy device, which does not use the DMA APIs, will continue to work. Once the floppy driver starts using the DMA APIs, this config option can be turned off or this patch can be yanked out of the kernel at that time. [akpm@linux-foundation.org: cleanups, rename things, build fix] [jengelh@computergmbh.de: Kconfig fixes] Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Christoph Lameter <clameter@sgi.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Jan Engelhardt <jengelh@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Keshavamurthy, Anil S
Once we fix all the open-source gfx drivers to use the DMA APIs, we can yank this config option out. [jengelh@computergmbh.de: Kconfig fixes] Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Christoph Lameter <clameter@sgi.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Jan Engelhardt <jengelh@gmx.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Keshavamurthy, Anil S
MSI interrupt handler registration and fault handling support for Intel IOMMU hardware. This patch enables the MSI interrupts for the DMA remapping units; the interrupt handler reads the fault cause and outputs it to the console. Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Christoph Lameter <clameter@sgi.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Keshavamurthy, Anil S
Introduce the intel_iommu=forcedac command-line option. This option is helpful for verifying a PCI device's capability to handle physical DMA addresses greater than 4G. Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Christoph Lameter <clameter@sgi.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
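A hedged sketch of how such a boot parameter is typically wired up (the function and flag names are illustrative, not necessarily the driver's own):

    #include <linux/init.h>
    #include <linux/string.h>

    static int dmar_forced_dac;     /* when set, hand devices 64-bit (DAC) DMA addresses */

    static int __init intel_iommu_setup(char *str)
    {
        if (str && strstr(str, "forcedac"))
            dmar_forced_dac = 1;
        return 0;
    }
    __setup("intel_iommu=", intel_iommu_setup);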
-
Committed by Keshavamurthy, Anil S
The Intel IOMMU driver needs memory during DMA map calls to set up its internal page tables and other data structures. As we all know, these DMA map calls are mostly made in interrupt context or with a spinlock held by the upper-level drivers (network/storage drivers), so in order to avoid memory allocation failures due to low-memory conditions, this patch temporarily sets the PF_MEMALLOC flag for the current task before making memory allocation calls.

We evaluated mempools as a backup for when kmem_cache_alloc() fails and found that mempools are really not useful here because 1) we don't know for sure how much to reserve in advance, and 2) mempools are not useful for the GFP_ATOMIC case (as we call the memory alloc functions with GFP_ATOMIC). (akpm: point 2 is wrong...)

With the PF_MEMALLOC flag set in current->flags, the VM subsystem skips any watermark checks before allocating memory, thus guaranteeing memory until the last free page. Further, looking at the code in mm/page_alloc.c in the __alloc_pages() function, it looks like this flag is useful only in non-interrupt context. If we are in interrupt context and memory allocation in the IOMMU driver fails for some reason, then the DMA map APIs will return failure and it is up to the higher-level drivers to retry. If an upper-level driver programs the controller with a buggy DMA virtual address, the IOMMU will block that DMA transaction when it happens, thus preventing any corruption to main memory.

So far in our test scenario, we were unable to create any memory allocation failure inside the DMA map API calls.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Christoph Lameter <clameter@sgi.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
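The pattern being described is roughly the following (a minimal sketch; the wrapper name and cache are illustrative): note whether PF_MEMALLOC was already set, set it around the allocation, then restore it so the exemption does not leak.

    #include <linux/sched.h>
    #include <linux/slab.h>

    /* Allocate IOMMU page-table memory with VM watermark checks bypassed. */
    static void *iommu_mem_alloc(struct kmem_cache *cachep, gfp_t flags)
    {
        unsigned int already = current->flags & PF_MEMALLOC;
        void *obj;

        current->flags |= PF_MEMALLOC;          /* may dip into reserves */
        obj = kmem_cache_alloc(cachep, flags);  /* typically GFP_ATOMIC here */
        if (!already)
            current->flags &= ~PF_MEMALLOC;     /* restore only if we set it */

        return obj;
    }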
-
Committed by Keshavamurthy, Anil S
The actual Intel IOMMU driver. The hardware spec can be found at: http://www.intel.com/technology/virtualization This driver sets the X86_64 'dma_ops', so it hooks into the standard DMA APIs. In this way, PCI drivers will get virtual DMA addresses. This change is transparent to PCI drivers. [akpm@linux-foundation.org: remove unneeded cast] [akpm@linux-foundation.org: build fix] [bunk@stusta.de: fix duplicate CONFIG_DMAR Makefile line] Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Christoph Lameter <clameter@sgi.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Keshavamurthy, Anil S
This code implements generic IOVA allocation and management. As per Dave's suggestion, we now allocate IO virtual addresses from the higher DMA limit address rather than the lower end, and this eliminates the need to preserve the IO virtual address for multiple devices sharing the same domain virtual address. This code also uses red-black trees to store the allocated and reserved iova nodes, which showed good performance improvements over the previous linear linked list. [akpm@linux-foundation.org: remove inlines] [akpm@linux-foundation.org: coding style fixes] Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Christoph Lameter <clameter@sgi.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Keshavamurthy, Anil S
When devices are under a p2p bridge, upstream transactions get replaced by the device id of the bridge, as it owns the PCIE transaction. Hence it's necessary to set up translations on behalf of the bridge as well. Due to this limitation, all devices under a p2p bridge share the same domain in a DMAR. We just cache the type of device, i.e. whether it is a native PCIe device or not, for later use. [akpm@linux-foundation.org: BUG_ON -> WARN_ON+recover] Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Christoph Lameter <clameter@sgi.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Keshavamurthy, Anil S
This patch supports the upcoming Intel IOMMU hardware, a.k.a. Intel(R) Virtualization Technology for Directed I/O Architecture; the hardware spec can be found here: http://www.intel.com/technology/virtualization/index.htm

FAQ! (questions from akpm, answers from ak)

> So... what's all this code for? I assume that the intent here is to speed things up under Xen, etc?

Yes in some cases, but not this code. That would be the Xen version of this code that could potentially assign whole devices to guests. I expect this to be only useful in some special cases though, because most hardware is not virtualizable and you typically want an own instance for each guest. Ok, at some point KVM might implement this too; I likely would use this code for this.

> Do we have any benchmark results to help us to decide whether a merge would be justified?

The main advantage for doing it in the normal kernel is not performance, but more safety. Broken devices won't be able to corrupt memory by doing random DMA. Unfortunately that doesn't work for graphics yet; for that, user space interfaces for the X server are needed. There are some potential performance benefits too:
- When you have a device that cannot address the complete address range, an IOMMU can remap its memory instead of bounce buffering. Remapping is likely cheaper than copying.
- The IOMMU can merge sg lists into a single virtual block. This could potentially speed up SG IO when the device is slow walking SG lists. [I long ago benchmarked 5% on some block benchmark with an old MPT Fusion; but it probably depends a lot on the HBA]
And you get better driver debugging because unexpected memory accesses from the devices will cause a trappable event.

> Does it slow anything down?

It adds more overhead to each IO, so yes.

This patch: Add support for early detection and parsing of DMARs (DMA Remapping) reported to the OS via ACPI tables. DMA remapping (DMAR) device support enables independent address translations for Direct Memory Access (DMA) from devices. These DMA remapping devices are reported via ACPI tables and include the PCI device scope covered by each DMA remapping device. For detailed info on the specification of "Intel(R) Virtualization Technology for Directed I/O Architecture", please see http://www.intel.com/technology/virtualization/index.htm

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Andi Kleen <ak@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Muli Ben-Yehuda <muli@il.ibm.com> Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Christoph Lameter <clameter@sgi.com> Cc: Greg KH <greg@kroah.com> Cc: Len Brown <lenb@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Yasunori Goto
The current memory notifier still has some defects. (Fortunately, nothing uses it yet.) This patch fixes and rearranges them:
- Add start_pfn, nr_pages, and node id information to the callback arguments when node status changes from/to a memoryless node. Callbacks can't do anything without this information.
- Add notification of going-online status. It is necessary for creating per-node structures before the node's pages are available.
- Move the GOING_OFFLINE status notification after page isolation. It is a good place for callbacks to return memory, such as caches, because the returned pages will not be used again.
- Add CANCEL events for rolling back when an error occurs.
- Delete the MEM_MAPPING_INVALID notification. It will not be used.
- Fix compile error of (un)register_memory_notifier().
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
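A hedged sketch of a callback built on the fields and events this patch describes (the callback itself and what it does at each event are illustrative):

    #include <linux/kernel.h>
    #include <linux/memory.h>
    #include <linux/notifier.h>

    /* Per this patch, struct memory_notify carries start_pfn, nr_pages and the
     * node whose memoryless status changes, alongside the GOING and CANCEL events. */
    static int example_memory_callback(struct notifier_block *self,
                                       unsigned long action, void *arg)
    {
        struct memory_notify *mn = arg;

        switch (action) {
        case MEM_GOING_ONLINE:
            /* allocate per-node structures before the pages become usable */
            pr_debug("node %d gaining %lu pages at pfn %lu\n",
                     mn->status_change_nid, mn->nr_pages, mn->start_pfn);
            break;
        case MEM_CANCEL_ONLINE:
            /* roll back whatever GOING_ONLINE set up */
            break;
        case MEM_GOING_OFFLINE:
            /* pages are already isolated: a good point to drop caches */
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block example_memory_nb = {
        .notifier_call = example_memory_callback,
    };

    /* registered somewhere during init:  register_memory_notifier(&example_memory_nb);  */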
-
Committed by Matthias Schwarzott
Calling saa7134_ir_stop at suspend is not a good idea for saa7134 cards without a remote control. Signed-off-by: Matthias Schwarzott <zzam@gentoo.org> Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
-
Committed by Ian Armstrong
Due to changes in the core ivtv driver as of release 1.0, the osd_compat module option has been rendered obsolete. This patch removes the option and all code associated with it. Signed-off-by: Ian Armstrong <ian@iarmst.demon.co.uk> Signed-off-by: Hans Verkuil <hverkuil@xs4all.nl> Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
-
Committed by Pedro
Improve GoTView PCI7135 remote control support under Linux. Acked-by: Hermann Pitton <hermann-pitton@arcor.de> Acked-by: Nickolay V. Shmyrev <nshmyrev@yandex.ru> Signed-off-by: Eugene M. Roginskii <roginovicci@nm.ru> Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
-
Committed by Patrick Boettcher
Since the 1.10 firmware is an improvement for most users, we should always use this firmware now. Signed-off-by: Patrick Boettcher <pb@linuxtv.org> Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
-
Committed by Mike Isely
This is a minor change to help with tracking the viability of the encoder chip within the PVR USB2 device. Signed-off-by: Mike Isely <isely@pobox.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
-