- 23 Jul 2010, 2 commits
-
-
Committed by Sheng Yang
Set the callback to receive evtchns from Xen, using the callback vector delivery mechanism. The traditional way of receiving event channel notifications from Xen is via the interrupts from the platform PCI device. The callback vector is a newer alternative that allows us to receive notifications on any vcpu and doesn't need any PCI support: we allocate a vector exclusively to receive events, and in the vector handler we don't need to interact with the vlapic, therefore we avoid a VMEXIT.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
-
Committed by Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 09 Jul 2010, 1 commit
-
-
Committed by Andy Walls
Signed-off-by: Andy Walls <awalls@md.metrocast.net>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
-
- 07 Jul 2010, 2 commits
-
-
Committed by Francisco Jerez
Repeated ttm_page_alloc_init/fini fails noisily because the pool manager kobj isn't zeroed out between uses (we could do just that, but statically allocated kobjects are generally considered a bad thing). Move it to kzalloc'ed memory. Note that this patch drops the refcounting behavior of the pool allocator init/fini functions: it would have led to a race condition in its current form, and anyway it was never exploited. This fixes a regression with reloading kms modules at runtime that has existed since the page allocator was introduced.
Signed-off-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Artem Bityutskiy
This patch introduces 3 VFS accessors: 'sb_mark_dirty()', 'sb_mark_clean()', and 'sb_is_dirty()'. They simply set, clear, or test 'sb->s_dirt'. The plan is to make every FS use these accessors later instead of manipulating the 'sb->s_dirt' flag directly. Ultimately, this change is a preparation for the periodic superblock synchronization optimization, which is about preventing the "sync_supers" kernel thread from waking up when there is nothing to synchronize. This patch does not make any functional change; it just adds the accessor functions.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
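A minimal sketch of what these accessors could look like, based only on the description above (the exact placement and form are assumptions):

    static inline void sb_mark_dirty(struct super_block *sb)
    {
            sb->s_dirt = 1;
    }

    static inline void sb_mark_clean(struct super_block *sb)
    {
            sb->s_dirt = 0;
    }

    static inline int sb_is_dirty(struct super_block *sb)
    {
            return sb->s_dirt;
    }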
-
- 06 Jul 2010, 4 commits
-
-
Committed by Christoph Hellwig
First, remove items from work_list as soon as we start working on them. This means we don't have to track any pending or visited state and can get rid of all the RCU magic for freeing the work items - we can simply free them once the operation has finished. Second, use a real completion for tracking synchronous requests - if the caller sets the completion pointer we complete it, otherwise use it as a boolean indicator that we can free the work item directly. Third, unify struct wb_writeback_args and struct bdi_work into a single data structure, wb_writeback_work. Previously we set all parameters in a struct wb_writeback_args, copied it into a struct bdi_work, and then copied that again onto the stack to use it there. Instead, just allocate one structure dynamically or on the stack and use it all the way through the call chain.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
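A rough sketch of the unified structure described above (the field set is an assumption inferred from the parameters mentioned here, not the exact definition):

    struct wb_writeback_work {
            long                      nr_pages;
            struct super_block        *sb;
            enum writeback_sync_modes sync_mode;
            unsigned int              for_kupdate:1;
            unsigned int              range_cyclic:1;
            unsigned int              for_background:1;

            struct list_head          list;   /* entry on the bdi's work list */
            struct completion         *done;  /* set only if the caller waits */
    };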
-
Committed by Christoph Hellwig
The case where we have a superblock doesn't require a loop here, as we scan over all inodes in writeback_sb_inodes. Split it out into a separate helper to make the code simpler. This also allows us to get rid of the sb member in struct writeback_control, which was rather out of place there. Also update the comments in writeback_sb_inodes that explain the handling of inodes from the wrong superblock.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Committed by Christoph Hellwig
This was just an odd wrapper around writeback_inodes_wb. Removing it also allows us to get rid of the bdi member of struct writeback_control, which was rather out of place there.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Committed by Ben Hutchings
netif_vdbg() was originally defined as entirely equivalent to netdev_vdbg(), but I assume that it was intended to take the same parameters as netif_dbg() etc. (Currently it is only used by the sfc driver, in which I worked on that assumption.) In commit a4ed89cb I changed the definition used when VERBOSE_DEBUG is not defined, but I failed to notice that the definition used when VERBOSE_DEBUG is defined was also not as I expected. Change that to match netif_dbg() as well.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
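A hedged sketch of the resulting shape, mirroring netif_dbg() (the exact macro expansion is an assumption):

    #if defined(VERBOSE_DEBUG)
    #define netif_vdbg      netif_dbg
    #else
    #define netif_vdbg(priv, type, dev, format, args...)             \
    ({                                                               \
            if (0)                                                   \
                    netif_printk(priv, type, KERN_DEBUG, dev,        \
                                 format, ##args);                    \
            0;                                                       \
    })
    #endif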
-
- 05 Jul 2010, 2 commits
-
-
Committed by Peter Zijlstra
Reimplement augmented RB-trees without sprinkling extra branches all over the RB-tree code (which lives in the scheduler hot path). This approach is 'borrowed' from Fabio's BFQ implementation and relies on traversing the rebalance path after the RB-tree op to correct the heap property for insertion/removal and make up for the damage done by the tree rotations. For insertion the rebalance path is trivially the one from the new node upwards to the root; for removal it is the path starting from the deepest node, on the to-be-removed node's path, that will still be around after the removal. [ This patch also fixes a video driver regression reported by Ali Gholami Rudi - the memtype->subtree_max_end was updated incorrectly. ]
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Ali Gholami Rudi <ali@rudi.ir>
Cc: Fabio Checconi <fabio@gandalf.sssup.it>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <1275414172.27810.27961.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yehuda Sadeh
We should initialize the module's dynamic debug datastructures only after determining that the module is not loaded yet. This fixes a bug introduced in 2.6.35-rc2, where trying to load a module twice also loaded its dynamic printing data twice, which caused all sorts of nasty issues. Also handle the dynamic debug cleanup later on failure.
Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (removed a #ifdef)
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 Jul 2010, 3 commits
-
-
Committed by Randy Dunlap
Fix kernel-doc warnings in linux/net.h:

    Warning(include/linux/net.h:151): No description found for parameter 'wq'
    Warning(include/linux/net.h:151): Excess struct/union/enum/typedef member 'fasync_list' description in 'socket'
    Warning(include/linux/net.h:151): Excess struct/union/enum/typedef member 'wait' description in 'socket'

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by John Fastabend
Reducing real_num_queues needs to flush the qdisc, otherwise skbs with queue_mappings greater than real_num_tx_queues can be sent to the underlying driver. The flow for this is:

    dev_queue_xmit()
        dev_pick_tx()
            skb_tx_hash()  => hash using real_num_tx_queues
            skb_set_queue_mapping()
    ...
    qdisc_enqueue_root() => enqueue skb on txq from hash
    ...
    dev->real_num_tx_queues -= n
    ...
    sch_direct_xmit()
        dev_hard_start_xmit()
            ndo_start_xmit(skb, dev) => skb queue set with old hash

skbs are enqueued on the qdisc with skb->queue_mapping set such that 0 < queue_mapping < real_num_tx_queues. When the driver decreases real_num_tx_queues, skbs may be dequeued from the qdisc with a queue_mapping greater than real_num_tx_queues. This fixes a case in ixgbe where this was occurring with DCB and FCoE. Because the driver is using queue_mapping to map skbs to tx descriptor rings, we can potentially map skbs to rings that no longer exist.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by John Fastabend
When calling qdisc_reset() the qdisc lock needs to be held. In this case there is at least one driver, i4l, which calls it without holding the lock. Add the locking here.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
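An illustration of the locking rule stated above (not the actual i4l change; the helper is hypothetical and which lock qualifies as "the qdisc lock" in a given context is an assumption):

    static void example_reset_qdisc(struct Qdisc *qdisc)
    {
            spinlock_t *root_lock = qdisc_lock(qdisc);

            spin_lock_bh(root_lock);
            qdisc_reset(qdisc);
            spin_unlock_bh(root_lock);
    }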
-
- 02 Jul 2010, 1 commit
-
-
Committed by Tejun Heo
For an as yet unknown reason, the MCP89 on the MBP 7,1 doesn't work with ahci under Linux, but the controller doesn't require explicit mode setting and works fine with ata_generic. Make ahci ignore the controller on the MBP 7,1 and let ata_generic take it for now. Reported in bko#15923: https://bugzilla.kernel.org/show_bug.cgi?id=15923 NVIDIA is investigating why ahci mode doesn't work.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Peer Chen <pchen@nvidia.com>
Cc: stable@kernel.org
Reported-by: Anders Østhus <grapz666@gmail.com>
Reported-by: Andreas Graf <andreas_graf@csgraf.de>
Reported-by: Benoit Gschwind <gschwind@gnu-log.net>
Reported-by: Damien Cassou <damien.cassou@gmail.com>
Reported-by: tixetsal@juno.com
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
-
- 01 Jul 2010, 4 commits
-
-
Committed by Peter Zijlstra
Commit 0224cf4c (sched: Intoduce get_cpu_iowait_time_us()) broke things by not making sure preemption was indeed disabled by the callers of nr_iowait_cpu(), which took the iowait value of the current cpu. This resulted in a heap of preempt warnings. Cure this by making nr_iowait_cpu() take a cpu number and fixing up the callers to pass in the right number.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Maxim Levitsky <maximlevitsky@gmail.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: linux-pm@lists.linux-foundation.org
LKML-Reference: <1277968037.1868.120.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
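A minimal sketch of the new interface (only the signature change is described above; the body shown is an assumption):

    unsigned long nr_iowait_cpu(int cpu)
    {
            struct rq *this = cpu_rq(cpu);

            return atomic_read(&this->nr_iowait);
    }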
-
Committed by Dave Airlie
When I added the flags I must have been using a 25-line terminal and missed the following flags. The collided-with flag has one user in staging despite being in-tree for 5 years. I'm happy to push this via my drm tree unless someone really wants to do it.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Cc: stable@kernel.org
-
Committed by Saeed Bishara
Some controllers (KW, Dove) limit the TX IP/layer-4 checksum offloading to a maximum size.
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Acked-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Andreas Steffen
Determine the size of the xfrm_mark struct, not of its pointer.
Signed-off-by: Andreas Steffen <andreas.steffen@strongswan.org>
Acked-by: Jamal Hadi Salim <hadi@cyberus.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>
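A generic illustration of the bug class being fixed (a hypothetical fragment, not the actual xfrm netlink code):

    static int mark_attr_size_example(const struct xfrm_mark *m)
    {
            /* sizeof(m) would be the size of the pointer; what is wanted
             * here is the size of the structure itself. */
            return nla_total_size(sizeof(*m));
    }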
-
- 30 Jun 2010, 2 commits
-
-
Committed by Mikael Pettersson
A __naked function is defined in C but with a body completely implemented by asm(), including any prologue and epilogue. These asm() bodies expect standard calling conventions for parameter passing. Older GCCs implement that correctly, but 4.[56] currently do not; see GCC PR44290. In the Linux kernel this breaks ARM, causing most arch/arm/mm/copypage-*.c modules to get miscompiled, resulting in kernel crashes during bootup. Part of the kernel fix is to augment the __naked function attribute to also imply noinline and noclone. This patch implements that, and has been verified to fix boot failures with gcc-4.5-compiled 2.6.34 and 2.6.35-rc1 kernels. The patch is a no-op with older GCCs.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
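Roughly, the augmented attribute could look like this (a sketch; the exact macro text, and that a __noclone helper expands to the corresponding GCC attribute, are assumptions):

    /* Forbid inlining and cloning so the hand-written asm body keeps the
     * standard calling convention it was written against. */
    #define __naked __attribute__((naked)) noinline __noclone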
-
Committed by npiggin@suse.de
list_for_each_entry_safe is not suitable to protect against concurrent modification of the list. Commit 6754af64 introduced a race in sb walking. list_for_each_entry can use the trick of pinning the current entry in the list before we drop and retake the lock, because it subsequently follows cur->next. However, list_for_each_entry_safe saves n = cur->next for following before entering the loop body, so when the lock is dropped, n may be deleted.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: John Stultz <johnstul@us.ibm.com>
Cc: Frank Mayhar <fmayhar@google.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
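An illustrative, simplified sketch of the pinning trick described above (not the exact fs/super.c code; the work callback is hypothetical):

    static void for_each_super_example(void (*work)(struct super_block *))
    {
            struct super_block *sb;

            spin_lock(&sb_lock);
            list_for_each_entry(sb, &super_blocks, s_list) {
                    sb->s_count++;            /* pin the current entry */
                    spin_unlock(&sb_lock);

                    work(sb);                 /* runs without sb_lock held */

                    spin_lock(&sb_lock);
                    __put_super(sb);          /* unpin; ->next is re-read under the lock */
            }
            spin_unlock(&sb_lock);
    }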
-
- 29 Jun 2010, 1 commit
-
-
Committed by Ben Hutchings
struct ethtool_rxnfc was originally defined in 2.6.27 for the ETHTOOL_{G,S}RXFH commands with only the cmd, flow_type and data fields. It was then extended in 2.6.30 to support various additional commands. These commands should have been defined to use a new structure, but it is too late to change that now. Since user-space may still be using the old structure definition for the ETHTOOL_{G,S}RXFH commands, and since they do not need the additional fields, only copy the originally defined fields to and from user-space.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Cc: stable@kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
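A hedged sketch of the approach, as a fragment from a hypothetical ioctl handler (variable names are assumptions):

    struct ethtool_rxnfc info;
    size_t info_size = sizeof(info);

    /* The RXFH commands predate the extended fields, so only copy the
     * originally defined cmd/flow_type/data prefix for them. */
    if (cmd == ETHTOOL_GRXFH || cmd == ETHTOOL_SRXFH)
            info_size = offsetof(struct ethtool_rxnfc, data) + sizeof(info.data);

    if (copy_from_user(&info, useraddr, info_size))
            return -EFAULT;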
-
- 24 Jun 2010, 1 commit
-
-
Committed by Eric Dumazet
Commit aa2ea058 (tcp: fix outsegs stat for TSO segments) incorrectly assumed SNMP_ADD_STATS() was used from BH context. Fix this by using mib[!in_softirq()] instead of mib[0].
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
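A hedged sketch of the idea: the per-CPU SNMP counters of that era came in two sets, one updated from BH context and one from process context, so the macro picks the set by checking in_softirq() (the surrounding per-cpu accessor and array layout are assumptions):

    #define SNMP_ADD_STATS(mib, field, addend)                       \
            this_cpu_add(mib[!in_softirq()]->mibs[field], addend)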
-
- 22 Jun 2010, 1 commit
-
-
Committed by Wu Zhangjin
The header file include/linux/tracepoint.h may be included without include/linux/errno.h, and the build then fails because ENOSYS is undeclared. This patch fixes the problem by including <linux/errno.h> in include/linux/tracepoint.h.
Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
LKML-Reference: <1277118549-622-1-git-send-email-wuzhangjin@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
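Per the description, the fix amounts to a single include near the top of the header:

    #include <linux/errno.h>        /* ENOSYS is referenced later in this header */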
-
- 15 Jun 2010, 1 commit
-
-
Committed by Dave Airlie
Since the code that was too ugly to live is upstream, we can use it now instead of rolling our own.
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 14 Jun 2010, 1 commit
-
-
Committed by Philipp Reisner
This was a very hard-to-trigger race condition. If we got a state packet from the peer after drbd_nl_disk() had already changed the disk state to D_NEGOTIATING, but before after_state_ch() was run by the worker, then receive_state() might call drbd_sync_handshake(), which in turn crashed when accessing p_uuid.
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
-
- 12 Jun 2010, 4 commits
-
-
Committed by Matthew Garrett
This feature is optional and is enabled if the BIOS requests any Windows OSI strings. It can also be enabled by the host OS.
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Bob Moore
It was incorrectly AE_WAKE_ONLY_GPE.
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Rafael J. Wysocki
ACPICA uses acpi_hw_write_gpe_enable_reg() to re-enable a GPE after an event signaled by it has been handled. However, this function writes the entire GPE enable mask to the GPE's enable register, which may not be correct. Namely, if one of the other GPEs in the same register was previously enabled by acpi_enable_gpe() and subsequently disabled using acpi_set_gpe(), acpi_hw_write_gpe_enable_reg() will re-enable it along with the target GPE. To fix this issue, rework acpi_hw_write_gpe_enable_reg() so that it calls acpi_hw_low_set_gpe() with a special action value, ACPI_GPE_COND_ENABLE, that makes it enable the GPE only if the corresponding bit in its register's enable_for_run mask is set.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Rafael J. Wysocki
ACPICA uses acpi_ev_enable_gpe() for enabling GPEs at the low level, which is incorrect, because this function only enables the GPE if the corresponding bit in its enable register's enable_for_run mask is set. This causes acpi_set_gpe() to work incorrectly if used for enabling GPEs that were not previously enabled with acpi_enable_gpe(). As a result, among other things, wakeup-only GPEs are never enabled by acpi_enable_wakeup_device(), so the devices that use them are unable to wake up the system. To fix this issue, remove acpi_ev_enable_gpe() and its counterpart acpi_ev_disable_gpe(), and replace acpi_hw_low_disable_gpe() with acpi_hw_low_set_gpe(), which will be used instead to manipulate GPE enable bits at the low level. Make the users of acpi_ev_enable_gpe() and acpi_ev_disable_gpe() call acpi_hw_low_set_gpe() instead, and make sure that GPE enable masks are only updated by acpi_enable_gpe() and acpi_disable_gpe() when GPE reference counters change from 0 to 1 and from 1 to 0, respectively.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 11 Jun 2010, 2 commits
-
-
Committed by Christoph Hellwig
bdi_start_writeback now never gets a superblock passed, so we can just remove that case. To further untangle the code and flatten the call stack, split it into two trivial helpers, one for each of its two callers.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Committed by John Fastabend
Currently, the accelerated receive path for VLANs will drop packets if the real device is an inactive slave and is not one of the special pkts tested for in skb_bond_should_drop(). This behavior is different from the non-accelerated path and for pkts over a bonded vlan. For example, vlanx -> bond0 -> ethx will be dropped in the vlan path and not delivered to any packet handlers at all. However, bond0 -> vlanx -> ethx and bond0 -> ethx will be delivered to handlers that match the exact dev, because the VLAN path checks the real_dev, which is not a slave, and netif_recv_skb() doesn't drop frames but only delivers them to exact matches. This patch adds an sk_buff flag which is used for tagging skbs that would previously have been dropped and allows the skb to continue to skb_netif_recv(). Here we add logic to check for the deliver_no_wcard flag and, if it is set, only deliver to handlers that match exactly. This makes both paths above consistent and gives pkt handlers a way to identify skbs that come from inactive slaves. Without this patch, in some configurations skbs will be delivered to handlers with exact matches and in others be dropped outright in the vlan path. I have tested the following 4 configurations in failover modes and load balancing modes:

    # bond0 -> ethx
    # vlanx -> bond0 -> ethx
    # bond0 -> vlanx -> ethx
    # bond0 -> ethx | vlanx -> --

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Jun 2010, 1 commit
-
-
Committed by Matthew Garrett
Saving platform non-volatile state may be required for suspend to RAM as well as hibernation. Move it to more generic code.
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Tested-by: Maxim Levitsky <maximlevitsky@gmail.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 09 Jun 2010, 4 commits
-
-
Committed by Alan Cox
Device number 10,233 is officially allocated to /dev/kmview, which is shipping in the Ubuntu and Debian distributions. vhost_net seems to have borrowed it without making a proper request, and this causes regressions in the other distributions. vhost_net can use a dynamic minor, so use that instead. Also update the file with a comment to try and avoid future misunderstandings.
cc: stable@kernel.org
Signed-off-by: Alan Cox <device@lanana.org>
[ We should have caught this before 2.6.34 got released. - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
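A hedged sketch of the vhost-net side of the change (struct and symbol names are assumptions):

    static struct miscdevice vhost_net_misc = {
            .minor = MISC_DYNAMIC_MINOR,    /* instead of the borrowed static minor */
            .name  = "vhost-net",
            .fops  = &vhost_net_fops,
    };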
-
Committed by Dave Chinner
If a filesystem writes more than one page in ->writepage, write_cache_pages fails to notice this and continues to attempt writeback when wbc->nr_to_write has gone negative - this trace was captured from XFS:

    wbc_writeback_start: towrt=1024
    wbc_writepage: towrt=1024
    wbc_writepage: towrt=0
    wbc_writepage: towrt=-1
    wbc_writepage: towrt=-5
    wbc_writepage: towrt=-21
    wbc_writepage: towrt=-85

This has adverse effects on filesystem writeback behaviour. write_cache_pages() needs to terminate after a certain number of pages are written, not after a certain number of calls to ->writepage are made. This is a regression introduced by 17bc6c30 ("vfs: Add no_nrwrite_index_update writeback control flag"), but it cannot be reverted directly due to subsequent bug fixes that have gone in on top of it.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
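A hypothetical, simplified loop illustrating the termination rule: stop on the remaining page budget, which ->writepage may decrement by more than one, rather than on the number of calls made:

    static int write_pages_example(struct address_space *mapping,
                                   struct writeback_control *wbc,
                                   struct page **pages, int nr)
    {
            int i, ret;

            for (i = 0; i < nr; i++) {
                    ret = mapping->a_ops->writepage(pages[i], wbc);
                    if (ret)
                            return ret;
                    /* A filesystem may write several pages per call and
                     * account for all of them in wbc->nr_to_write, so
                     * check the budget rather than the call count. */
                    if (wbc->nr_to_write <= 0)
                            break;
            }
            return 0;
    }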
-
Committed by Oleg Nesterov

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000006
    IP: [<ffffffff8107bd37>] ftrace_raw_event_signal_generate+0x87/0x140

TP_STORE_SIGINFO() forgets about SEND_SIG_FORCED; fix that. We should probably export is_si_special() and change TP_STORE_SIGINFO() to use it in the longer term.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Roland McGrath <roland@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: 2.6.33.x-2.6.34.x <stable@kernel.org>
LKML-Reference: <20100603213409.GA8307@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
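A hedged sketch of the guard implied by the description (the exact fields stored by the tracepoint are assumptions):

    #define TP_STORE_SIGINFO(__entry, info)                          \
            __entry->errno = (info == SEND_SIG_NOINFO ||             \
                              info == SEND_SIG_FORCED) ?             \
                                    0 : info->si_errno;              \
            __entry->code  = (info == SEND_SIG_NOINFO ||             \
                              info == SEND_SIG_FORCED) ?             \
                                    SI_USER : info->si_code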
-
Committed by Peter Zijlstra
PROVE_RCU has a few issues with the cpu_cgroup, because the scheduler typically holds rq->lock around the css rcu derefs but the generic cgroup code doesn't (and can't) know about that lock. Provide a means to add extra checks to the css dereference and use it in the scheduler to annotate its users. The addition of rq->lock to these checks is correct because the cgroup_subsys::attach() method takes the rq->lock for each task it moves; therefore, by holding that lock we ensure the task is pinned to the current cgroup and the RCU dereference is valid. That leaves one genuine race in __sched_setscheduler(), where we used task_group() without holding any of the required locks and thus raced with the cgroup code. Solve this by moving the check under the appropriate lock.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
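A hedged sketch of what the annotated dereference in the scheduler could look like (the helper and constant names are assumptions):

    static inline struct task_group *task_group(struct task_struct *p)
    {
            struct cgroup_subsys_state *css;

            /* Holding the task's rq->lock pins its cgroup, per the argument
             * above, so teach the RCU checks about that lock. */
            css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
                                          lockdep_is_held(&task_rq(p)->lock));
            return container_of(css, struct task_group, css);
    }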
-
- 08 Jun 2010, 2 commits
-
-
Committed by Alex Deucher
This is needed to enable accel in the ddx. However, due to a bug in older versions of the ddx, it relies on accel being disabled in order to load properly on evergreen chips. To maintain compatibility, we add a new get-accel param and call that from the ddx; the old one always returns false for evergreen cards. [This fixes a regression with older userspaces on newer kernels.]
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Tejun Heo
JMB362 is a new variant of the JMicron controller, similar to the JMB360 but with two SATA ports instead of one. As there is no PATA port, single-function AHCI mode can be used as on the JMB360. Add a PCI quirk for the JMB362.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Aries Lee <arieslee@jmicron.com>
Cc: stable@kernel.org
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
-
- 05 Jun 2010, 1 commit
-
-
Committed by Rusty Russell
These were placed in the header in ef665c1a to get the various SYSFS/MODULE config combinations to compile. That may have been necessary then, but it's not now. These functions are all local to module.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
-