- 04 Feb 2015, 1 commit

By Ulf Hansson
The reference counting was needed when genpd supported PM domain device callbacks. Since this option has been removed, let's also remove the reference counting of the struct generic_pm_domain_data. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 24 Jan 2015, 1 commit

By Ulf Hansson
There are currently no users of this API, so let's remove it. Additionally, if such a feature is needed in the future, a better option is likely to use pm_runtime_set_active|suspended() in some form. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 15 Jan 2015, 2 commits

By Thomas Graf
User space is currently sending an OVS_FLOW_ATTR_PROBE for both flow and packet messages. This leads to an out-of-bounds access in ovs_packet_cmd_execute() because OVS_FLOW_ATTR_PROBE > OVS_PACKET_ATTR_MAX. Introduce a new OVS_PACKET_ATTR_PROBE with the same numeric value as OVS_FLOW_ATTR_PROBE to grow the range of accepted packet attributes while remaining binary compatible with existing OVS binaries. Fixes: 05da5898 ("openvswitch: Add support for OVS_FLOW_ATTR_PROBE.") Reported-by: Sander Eikelenboom <linux@eikelenboom.it> Tracked-down-by: Florian Westphal <fw@strlen.de> Signed-off-by: Thomas Graf <tgraf@suug.ch> Reviewed-by: Jesse Gross <jesse@nicira.com> Acked-by: Pravin B Shelar <pshelar@nicira.com> Signed-off-by: David S. Miller <davem@davemloft.net>
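A minimal sketch of the uapi enum padding this describes: reserving slots so the new probe attribute lands on the same numeric value as OVS_FLOW_ATTR_PROBE. The placeholder attribute names are illustrative, not quoted from the upstream header.

```c
/* Illustrative sketch only: pad the packet attribute enum so the new probe
 * attribute gets the same numeric value as OVS_FLOW_ATTR_PROBE, keeping old
 * binaries that already send that value working.
 */
enum ovs_packet_attr {
	OVS_PACKET_ATTR_UNSPEC,
	/* ... existing packet attributes ... */
	OVS_PACKET_ATTR_UNUSED1,	/* reserved so the next value lines up */
	OVS_PACKET_ATTR_UNUSED2,
	OVS_PACKET_ATTR_PROBE,		/* same numeric value as OVS_FLOW_ATTR_PROBE */
	__OVS_PACKET_ATTR_MAX
};

#define OVS_PACKET_ATTR_MAX (__OVS_PACKET_ATTR_MAX - 1)
```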
-
By Benjamin Poirier
For example, one could conceivably call for_each_netdev_in_bond_rcu(condition ? bond1 : bond2, slave) and get an unexpected result. Signed-off-by: Benjamin Poirier <bpoirier@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
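The excerpt begins mid-explanation; the hazard it points at is the classic unparenthesized macro parameter, where an argument such as condition ? bond1 : bond2 is re-grouped by operator precedence inside the expansion. A small self-contained illustration of that hazard (not the kernel patch itself), with made-up macro names:

```c
#include <stdio.h>

/* Without parentheses around the parameter, operator precedence rewrites
 * the caller's expression inside the expansion.
 */
#define MATCHES_BAD(dev, bond)   ((dev) == bond)
#define MATCHES_GOOD(dev, bond)  ((dev) == (bond))

int main(void)
{
	int dev = 2, bond1 = 1, bond2 = 2, condition = 0;

	/* Expands to ((dev) == condition ? bond1 : bond2), i.e.
	 * ((dev == condition) ? bond1 : bond2) -> yields 2, one of the
	 * bonds, not a true/false comparison against the selected bond. */
	int bad  = MATCHES_BAD(dev, condition ? bond1 : bond2);

	/* Expands to ((dev) == (condition ? bond1 : bond2)) -> proper test. */
	int good = MATCHES_GOOD(dev, condition ? bond1 : bond2);

	printf("bad=%d good=%d\n", bad, good);	/* prints: bad=2 good=1 */
	return 0;
}
```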
-
- 14 Jan 2015, 2 commits

By Christian Borntraeger
Feedback has shown that WRITE_ONCE(x, val) is easier to use than ASSIGN_ONCE(val, x). There are no in-tree users yet, so let's change it for 3.19. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Davidlohr Bueso <dave@stgolabs.net> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
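A short usage sketch of the argument-order difference; the ring structure and helper below are invented for the example, while WRITE_ONCE itself comes from <linux/compiler.h>:

```c
#include <linux/compiler.h>

/* Old spelling read backwards:      ASSIGN_ONCE(new_tail, r->tail);
 * New spelling reads like a store:  destination first, value second.
 */
struct ring {
	unsigned int tail;
};

static void publish_tail(struct ring *r, unsigned int new_tail)
{
	/* A single store the compiler may not tear, fuse, or repeat,
	 * visible to concurrent readers of r->tail. */
	WRITE_ONCE(r->tail, new_tail);
}
```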
-
By B Viswanath
net: Corrected the comment describing the ndo operations to reflect the actual prototype for a couple of operations. Signed-off-by: B Viswanath <marichika4@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 Jan 2015, 2 commits

By Jan Beulich
Using the native code here can't work properly, as the hypervisor would normally have cleared the two reason bits by the time Dom0 gets to see the NMI (if it is passed to it at all). There's a shared info field for this, and there's an existing hook to use - just fit the two together. This is particularly relevant so that NMIs intended to be handled by APEI / GHES actually make it to the respective handler. Note that the hook can (and should) be used irrespective of whether we are in Dom0, as accessing port 0x61 in a DomU would be even worse, while the shared info field would just hold zero all the time. Note further that hardware NMI handling for PVH doesn't currently work anyway due to missing code in the hypervisor (but it is expected to work the native rather than the PV way). Signed-off-by: Jan Beulich <jbeulich@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
By Will Deacon
When batching up address ranges for TLB invalidation, we check tlb->end != 0 to indicate that some pages have actually been unmapped. As of commit f045bbb9 ("mmu_gather: fix over-eager tlb_flush_mmu_free() calling"), we use the same check for freeing these pages in order to avoid a performance regression where we call free_pages_and_swap_cache even when no pages are actually queued up. Unfortunately, the range could have been reset (tlb->end = 0) by tlb_end_vma, which has been shown to cause memory leaks on arm64. Furthermore, investigation into these leaks revealed that the fullmm case on task exit no longer invalidates the TLB, by virtue of tlb->end == 0 (in 3.18, need_flush would have been set). This patch resolves the problem by reverting commit f045bbb9, using instead tlb->local.nr as the predicate for page freeing in tlb_flush_mmu_free and ensuring that tlb->end is initialised to a non-zero value in the fullmm case. Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Dave Hansen <dave@sr71.net> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 12 Jan 2015, 1 commit

By Adrian Hunter
Re-tuning for HS400 mode must be done in HS200 mode. Currently there is no support for that, and that needs to be reflected in the code. Specifically, if tuning is executed in HS400 mode then return an error, and do not start the tuning timer if HS200 tuning is being done prior to switching to HS400. Note that periodic re-tuning is not expected to be needed for HS400, but re-tuning is still needed after the host controller has lost power. In the case of suspend/resume that is not necessary because the card is fully re-initialised. That just leaves runtime suspend/resume with no support for HS400 re-tuning. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 10 Jan 2015, 1 commit

By Nicholas Bellinger
Now that fabric_max_sectors is no longer used to enforce the maximum I/O size, go ahead and drop its left-over usage in target-core and the associated backend drivers. Cc: Christoph Hellwig <hch@lst.de> Cc: Martin K. Petersen <martin.petersen@oracle.com> Cc: Roland Dreier <roland@purestorage.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
- 09 Jan 2015, 5 commits

By Andy Lutomirski
On x86_64, at least, task_pt_regs may be only partially initialized in many contexts, so x86_64 should not use it without extra care from interrupt context, let alone NMI context. This will allow x86_64 to override the logic and will supply some scratch space to use to make a cleaner copy of user regs. Tested-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: chenggang.qcg@taobao.com Cc: Wu Fengguang <fengguang.wu@intel.com> Cc: Namhyung Kim <namhyung@gmail.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Jean Pihet <jean.pihet@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mark Salter <msalter@redhat.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/e431cd4c18c2e1c44c774f10758527fb2d1025c4.1420396372.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
By David Drysdale
Fix clashing values for O_PATH and FMODE_NONOTIFY on sparc. The clashing O_PATH value was added in commit 5229645b ("vfs: add nonconflicting values for O_PATH") but this can't be changed as it is user-visible. FMODE_NONOTIFY is only used internally in the kernel, but it is in the same numbering space as the other O_* flags, as indicated by the comment at the top of include/uapi/asm-generic/fcntl.h (and its use in fs/notify/fanotify/fanotify_user.c). So renumber it to avoid the clash. All of this has happened before (commit 12ed2e36: "fanotify: FMODE_NONOTIFY and __O_SYNC in sparc conflict"), and all of this will happen again -- so update the uniqueness check in fcntl_init() to include __FMODE_NONOTIFY. Signed-off-by: David Drysdale <drysdale@google.com> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Jan Kara <jack@suse.cz> Cc: Heinrich Schuchardt <xypron.glpk@gmx.de> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Eric Paris <eparis@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Johannes Weiner
Tejun, while reviewing the code, spotted the following race condition between the dirtying and truncation of a page:

    __set_page_dirty_nobuffers()       __delete_from_page_cache()
      if (TestSetPageDirty(page))
                                         page->mapping = NULL
                                         if (PageDirty())
                                           dec_zone_page_state(page, NR_FILE_DIRTY);
                                           dec_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);
        if (page->mapping)
          account_page_dirtied(page)
            __inc_zone_page_state(page, NR_FILE_DIRTY);
            __inc_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);

which results in an imbalance of NR_FILE_DIRTY and BDI_RECLAIMABLE. Dirtiers usually lock out truncation, either by holding the page lock directly, or in case of zap_pte_range(), by pinning the mapcount with the page table lock held. The notable exception to this rule, though, is do_wp_page(), for which this race exists. However, do_wp_page() already waits for a locked page to unlock before setting the dirty bit, in order to prevent a race where clear_page_dirty() misses the page bit in the presence of dirty ptes. Upgrade that wait to a fully locked set_page_dirty() to also cover the situation explained above. Afterwards, the code in set_page_dirty() dealing with a truncation race is no longer needed. Remove it. Reported-by: Tejun Heo <tj@kernel.org> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: Jan Kara <jack@suse.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Konstantin Khlebnikov
A constantly forking task causes unlimited growth of the anon_vma chain. Each next child allocates a new level of anon_vmas and links its vma to all previous levels, because pages might be inherited from any level. This patch adds a heuristic which decides to reuse an existing anon_vma instead of forking a new one. It adds a counter anon_vma->degree which counts linked vmas and directly descending anon_vmas, and reuses the anon_vma if the counter is lower than two. As a result each anon_vma has either a vma or at least two descending anon_vmas. In such trees half of the nodes are leaves with alive vmas, thus the count of anon_vmas is no more than twice the count of vmas. This heuristic reuses anon_vmas as little as possible, because each reuse adds false aliasing among vmas and the rmap walker then has to scan more ptes when it searches for where a page might be mapped. Link: http://lkml.kernel.org/r/20120816024610.GA5350@evergreen.ssec.wisc.edu Fixes: 5beb4930 ("mm: change anon_vma linking to fix multi-process server scalability issue") [akpm@linux-foundation.org: fix typo, per Rik] Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com> Reported-by: Daniel Forrest <dan.forrest@ssec.wisc.edu> Tested-by: Michal Hocko <mhocko@suse.cz> Tested-by: Jerome Marchand <jmarchan@redhat.com> Reviewed-by: Michal Hocko <mhocko@suse.cz> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: <stable@vger.kernel.org> [2.6.34+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
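A minimal sketch of the reuse heuristic, assuming only the anon_vma->degree counter named above; the helper and its placement are illustrative, not the upstream patch:

```c
/*
 * Illustrative only: reuse a candidate anon_vma when its degree (linked
 * vmas plus directly descending anon_vmas) is below two, instead of
 * allocating a new level in the hierarchy during fork.
 */
static struct anon_vma *pick_reusable_anon_vma(struct anon_vma *candidate)
{
	if (candidate && candidate->degree < 2)
		return candidate;	/* keeps the hierarchy shallow */
	return NULL;			/* caller allocates a fresh anon_vma */
}
```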
-
By Ilya Dryomov
The only real issue is the one in auth_x.c, and it came in with the 3.19-rc1 merge. Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
-
- 08 Jan 2015, 5 commits

By Keith Busch
Some types of requests may be started that are not guaranteed to ever complete. This adds a request flag that a driver can use to mark the request as such. Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
By Jens Axboe
Add a helper function a driver can use to abort requeued requests in case any are pending when the h/w queues are being removed. Signed-off-by: Jens Axboe <axboe@fb.com>
-
By Keith Busch
Kicking requeued requests will start h/w queues in a work_queue, which may alter the driver's requested state to temporarily stop them. This patch exports a method to cancel the q->requeue_work so a driver can be assured that stopped h/w queues won't be started up before it is ready. Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
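A hedged sketch of how a driver's quiesce path might combine the helpers from this and the previous commit; the function names follow the descriptions above and are assumed rather than verified against a particular tree:

```c
/*
 * Illustrative teardown/suspend ordering: stop dispatch, keep the requeue
 * work from restarting the queues behind our back, then fail anything that
 * is still parked on the requeue list.
 */
static void my_driver_quiesce(struct request_queue *q)
{
	blk_mq_stop_hw_queues(q);		/* stop dispatching new requests */
	blk_mq_cancel_requeue_work(q);		/* requeue work must not restart them */
	blk_mq_abort_requeue_list(q);		/* fail requests waiting to be requeued */
}
```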
-
By Keith Busch
Drivers can iterate over all allocated request tags, but their callback needs a way to know whether the driver actually started the request in the first place. Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
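A hedged sketch of a tag-iteration callback using the helper this commit describes; the callback signature and the surrounding iteration are illustrative:

```c
/*
 * Illustrative callback for a tag iterator: only touch requests the driver
 * actually issued to hardware; tags that were allocated but never started
 * are skipped.
 */
static void my_cancel_rq(struct request *rq, void *data, bool reserved)
{
	if (!blk_mq_request_started(rq))
		return;		/* allocated but never issued: nothing to do */

	/* ... fail or recover the in-flight request here ... */
}
```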
-
By Jens Axboe
We store it in the tag set, so we don't need it in the hardware queue. While removing cmd_size, place ->queue_num further down to avoid a hole on 64-bit archs. It's not used in any fast paths, so we can safely move it. Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 07 Jan 2015, 2 commits

By Linus Torvalds
Jay Foad reports that the address sanitizer test (asan) sometimes gets confused by a stack pointer that ends up being outside the stack vma that is reported by /proc/maps. This happens due to an interaction between RLIMIT_STACK and the guard page: when we do the guard page check, we ignore the potential error from the stack expansion, which effectively results in a missing guard page, since the expected stack expansion won't have been done. And since /proc/maps explicitly ignores the guard page (commit d7824370: "mm: fix up some user-visible effects of the stack guard page"), the stack pointer ends up being outside the reported stack area. This is the minimal patch: it just propagates the error. It also effectively makes the guard page part of the stack limit, which in turn means that the actual real stack is one page less than the stack limit. Let's see if anybody notices. We could teach acct_stack_growth() to allow an extra page for a grow-up/grow-down stack in the rlimit test, but I don't want to add more complexity if it isn't needed. Reported-and-tested-by: Jay Foad <jay.foad@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Oded Gabbay
This patch reformats the ioctl definitions in kfd_ioctl.h to be similar to the drm ioctl definition style. v2: Renamed KFD_COMMAND_(START|END) to AMDKFD_... Signed-off-by: Oded Gabbay <oded.gabbay@amd.com> Acked-by: Christian König <christian.koenig@amd.com>
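A hedged sketch of the drm-style ioctl definition layout being described; the specific ioctl names and numbers are illustrative, not quoted from the header:

```c
/* Illustrative drm-style layout: a base magic, wrapper macros, and a
 * contiguous command range bounded by START/END constants.
 */
#define AMDKFD_IOCTL_BASE		'K'
#define AMDKFD_IOR(nr, type)		_IOR(AMDKFD_IOCTL_BASE, nr, type)
#define AMDKFD_IOWR(nr, type)		_IOWR(AMDKFD_IOCTL_BASE, nr, type)

#define AMDKFD_IOC_GET_VERSION		\
	AMDKFD_IOR(0x01, struct kfd_ioctl_get_version_args)
#define AMDKFD_IOC_CREATE_QUEUE		\
	AMDKFD_IOWR(0x02, struct kfd_ioctl_create_queue_args)

#define AMDKFD_COMMAND_START		0x01
#define AMDKFD_COMMAND_END		0x03	/* one past the last command */
```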
-
- 06 Jan 2015, 3 commits

By Trond Myklebust
Ensure that we cache the NFSv4/v4.1 client owner_id so that we can verify it when we're doing trunking detection. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
By Hanjun Guo
acpi_map_lsapic() will allocate a logical CPU number and map it to the physical CPU id (such as the APIC id) for the hot-added CPU; it also sets up the NUMA node id mapping and so on, and acpi_unmap_lsapic() does the reverse. The names of these functions are a little confusing and arch (IA64) dependent, so rename them to acpi_(un)map_cpu() to make them arch-agnostic and explicit. Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Hanjun Guo
apic_id in the MADT table is the CPU hardware id by which the CPU identifies itself in the system for x86 and ia64; OSPM uses it during SMP init to map APIC IDs to logical cpu numbers in early boot. When the DSDT/SSDT (ACPI namespace) is scanned later, the ACPI processor driver is probed, and the driver uses acpi_id from the DSDT to get the apic_id, then maps it to the logical cpu number needed by the processor driver. Before ACPI 5.0, only x86 and ia64 were supported in the ACPI spec, so using apic_id both in arch code and in the ACPI core was fine. Since ACPI 5.0, ARM is supported by ACPI, and APIC is not available on ARM, which is confusing when apic_id is used for both x86 and ARM in one function. So convert apic_id to phys_id (its original meaning) in the ACPI processor driver to make it arch-agnostic, but leave the arch-dependent code unchanged; no functional change. Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 05 Jan 2015, 1 commit

By Johannes Berg
This reverts commit ca34e3b5. It turns out that the p54 and cw1200 drivers assume that there's tailroom even when they don't say they really need it. However, there's currently no way for them to explicitly say they do need it, so for now revert this. This fixes https://bugzilla.kernel.org/show_bug.cgi?id=90331. Cc: stable@vger.kernel.org Fixes: ca34e3b5 ("mac80211: Fix accounting of the tailroom-needed counter") Reported-by: Christopher Chavez <chrischavez@gmx.us> Bisected-by: Larry Finger <Larry.Finger@lwfinger.net> Debugged-by: Christian Lamparter <chunkeey@googlemail.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
- 30 Dec 2014, 2 commits

By Lars-Peter Clausen
Fix a copy and paste error in the kernel-doc description for the params_*() functions. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Takashi Iwai <tiwai@suse.de>
-
By Michal Hocko
Commit 2457aec6 ("mm: non-atomically mark page accessed during page cache allocation where possible") added a separate parameter for specifying the gfp mask for radix tree allocations. Not only is this less than optimal from the API point of view because it is error prone, it is also currently buggy: grab_cache_page_write_begin is using GFP_KERNEL for the radix tree, and if fgp_flags doesn't contain FGP_NOFS (mostly controlled by the fs via the AOP_FLAG_NOFS flag) but the mapping_gfp_mask has __GFP_FS cleared, then the radix tree allocation wouldn't obey the restriction and might recurse into the filesystem and cause deadlocks. This is the case for most filesystems, unfortunately, because only ext4 and gfs2 are using AOP_FLAG_NOFS. Let's simply remove the radix_gfp_mask parameter because the allocation context is the same for both the page cache and the radix tree. Just make sure that the radix tree gets only the sane subset of the mask (e.g. do not pass __GFP_WRITE). Long term it is more preferable to convert the remaining users of AOP_FLAG_NOFS to use mapping_gfp_mask instead and simplify this interface even further. Reported-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Michal Hocko <mhocko@suse.cz> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
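A small sketch of the idea: derive the radix-tree allocation mask from the single page-cache gfp mask rather than accepting a second, independent mask. The helper name is invented for illustration, and masking __GFP_WRITE mirrors the "sane subset" remark above:

```c
/*
 * Illustrative helper only: both the page cache page and its radix tree
 * node are allocated in the same context, so both restrictions (e.g. a
 * cleared __GFP_FS) come from the one mask; the radix tree just never
 * needs writeback-related bits.
 */
static inline gfp_t radix_gfp_from_pagecache(gfp_t gfp_mask)
{
	return gfp_mask & ~__GFP_WRITE;
}
```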
-
- 29 Dec 2014, 1 commit

By Michael S. Tsirkin
The host needs to know the vring element alignment requirements: simply doing alignof on structures doesn't work reliably; on some platforms gcc has alignof(uint32_t) == 2. Add macros for the alignment as specified in virtio 1.0 cs01, and export them to userspace as well. Acked-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
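A hedged sketch of the exported alignment constants; the values follow the virtio 1.0 cs01 layout of the ring structures, but the exact macro names are assumed rather than quoted from the header:

```c
/* Illustrative only: fixed, spec-mandated alignments exported to userspace
 * so hosts and guests agree regardless of what a compiler's alignof says.
 */
#define VRING_AVAIL_ALIGN_SIZE	2	/* available ring: u16 entries        */
#define VRING_USED_ALIGN_SIZE	4	/* used ring: struct vring_used_elem  */
#define VRING_DESC_ALIGN_SIZE	16	/* descriptor table: struct vring_desc */
```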
-
- 27 Dec 2014, 5 commits

By Johannes Berg
Netlink families can exist in multiple namespaces, and for the most part multicast subscriptions are per network namespace. Thus it only makes sense to have bind/unbind notifications per network namespace. To achieve this, pass the network namespace of a given client socket to the bind/unbind functions. Also do this in generic netlink, and there also make sure that any bind for multicast groups that only exist in init_net is rejected. This isn't really a problem if it is accepted, since a client in a different namespace will never receive any notifications from such a group, but it can confuse the family if not rejected. (It would also be possible to silently accept it without telling the family, but it would then also have to be ignored on unbind, so that families that take any kind of action on bind/unbind won't do unnecessary work for invalid clients like that.) Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
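A hedged sketch of a per-namespace bind()/unbind() pair in the style described here: the subscribing socket's network namespace is passed in, and a group that only exists in init_net is rejected elsewhere. The callback signatures and the MY_INIT_NET_ONLY_GROUP constant are assumptions for illustration:

```c
/* Illustrative callbacks: reject subscriptions that make no sense in the
 * client's namespace so the family never has to track bogus subscribers.
 */
static int my_family_mcast_bind(struct net *net, int group)
{
	if (group == MY_INIT_NET_ONLY_GROUP && !net_eq(net, &init_net))
		return -ENOENT;		/* group does not exist in this namespace */
	return 0;
}

static void my_family_mcast_unbind(struct net *net, int group)
{
	/* mirror whatever per-namespace state bind() set up */
}
```

Rejecting at bind time, as the text argues, keeps families that react to bind/unbind from doing work for clients that can never receive the notifications anyway.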
-
By Johannes Berg
In order to make use of the newly fixed multicast bind/unbind functionality in generic netlink, pass the calls down to the appropriate family. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Johannes Berg
There's no point in forcing the caller to know about the internal genl_sock to use inside struct net; just have them pass the network namespace. This doesn't really change code generation, since it's an inline, but it makes the caller less magic - there's never any reason to pass another socket. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jesse Gross
GSO isn't the only offload feature with restrictions that potentially can't be expressed with the current features mechanism. Checksumming is another, although it's a general issue that could in theory apply to anything. Even if it may be possible to implement these restrictions in other ways, that can result in duplicate code or inefficient per-packet behavior. This generalizes ndo_gso_check so that drivers can remove any features that don't make sense for a given packet, similar to netif_skb_features(). It also converts existing driver restrictions to the new format, completing the work that was done to support tunnel protocols, since the issues apply to checksums as well. By actually removing features from the set that are used to do offloading, it solves another problem with the existing interface: in those cases, GSO would run with the original set of features and not do anything because it appears that segmentation is not required. CC: Tom Herbert <therbert@google.com> CC: Joe Stringer <joestringer@nicira.com> CC: Eric Dumazet <edumazet@google.com> CC: Hayes Wang <hayeswang@realtek.com> Signed-off-by: Jesse Gross <jesse@nicira.com> Acked-by: Tom Herbert <therbert@google.com> Fixes: 04ffcb25 ("net: Add ndo_gso_check") Tested-by: Hayes Wang <hayeswang@realtek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
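A hedged sketch of a driver callback in the generalized form described above: rather than answering a boolean "can I GSO this?", the driver prunes whatever offload features do not apply to the given skb, in the spirit of netif_skb_features(). The callback name and exact signature follow the description and may differ from the final upstream interface:

```c
/* Illustrative driver callback: drop offloads this hardware cannot perform
 * on the specific packet, e.g. anything encapsulated.
 */
static netdev_features_t my_features_check(struct sk_buff *skb,
					   struct net_device *dev,
					   netdev_features_t features)
{
	if (skb->encapsulation)
		features &= ~(NETIF_F_ALL_CSUM | NETIF_F_GSO_MASK);

	return features;
}
```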
-
By Nicolas Dichtel
After commit d7480fd3 ("neigh: remove dynamic neigh table registration support"), this field is not used anymore. CC: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Acked-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 Dec 2014, 2 commits

By Dave Airlie
This reverts commit 355a7018. It had some bad side effects under normal operation, and should have been dropped earlier. Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Richard Guy Briggs
A regression was caused by commit 780a7654 ("audit: Make testing for a valid loginuid explicit"), which in turn attempted to fix a regression caused by e1760bd5. When audit_krule_to_data() fills in the rules to produce a listing, there was a missing clause to convert back from AUDIT_LOGINUID_SET to AUDIT_LOGINUID. This broke userspace by not returning the same information that was sent and expected.

The rule:
    auditctl -a exit,never -F auid=-1
gives:
    auditctl -l
    LIST_RULES: exit,never f24=0 syscall=all
when it should give:
    LIST_RULES: exit,never auid=-1 (0xffffffff) syscall=all

Tag it so that it is reported the same way it was set. Create a new private flags audit_krule field (pflags) to store it that won't interact with the public one from the API. Cc: stable@vger.kernel.org # v3.10-rc1+ Signed-off-by: Richard Guy Briggs <rgb@redhat.com> Signed-off-by: Paul Moore <pmoore@redhat.com>
-
- 23 Dec 2014, 2 commits

By Vignesh R
Prior to DRA74x silicon rev 1.1, pcie_pcs register bits 8-15 and bits 16-23 were used to configure the RC delay count for phy1 and phy2 respectively. phyid was used as an index to distinguish the phys and to configure the delay values appropriately. As of DRA74x silicon rev 1.1, the pcie_pcs register definition has changed: bits 16-23 are used to configure the delay value for *both* phy1 and phy2, so phyid is no longer required. Hence, drop the id field from the ti_pipe3 structure and its subsequent references for configuring the pcie_pcs register. Also, the pcie_pcs register now needs to be configured with a delay value of 0x96 at bit positions 16-23; see the register description of CTRL_CORE_PCIE_PCS in the ARM572x TRM, SPRUHZ6, October 2014, section 18.5.2.2, table 18-1804. This is needed to ensure Gen2 cards are enumerated consistently. DRA72x silicon behaves the same way as DRA74x rev 1.1 as far as this functionality is concerned.

Test results on DRA74x and DRA72x EVMs:

Before patch:
- DRA74x ES 1.0: Gen1 cards work; Gen2 cards do not work (expected result due to silicon errata)
- DRA74x ES 1.1: Gen1 cards work; Gen2 cards sometimes do not work due to incorrect programming of the register
- DRA72x: Gen1 cards work; Gen2 cards sometimes do not work due to incorrect programming of the register

After patch:
- DRA74x ES 1.0: Gen1 cards work; Gen2 cards do not work (expected result due to silicon errata)
- DRA74x ES 1.1: Gen1 cards work; Gen2 cards work consistently
- DRA72x: Gen1 and Gen2 cards enumerate consistently

Signed-off-by: Vignesh R <vigneshr@ti.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
-
By stephen hemminger
Resolve conflicts between the glibc definitions of IPV6 socket options and those defined in the Linux headers. It looks like earlier efforts to solve this did not cover all the definitions. This resolves warnings during the iproute2 build. Please consider this for stable as well. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
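A hedged sketch of the coordination pattern between kernel uapi headers and glibc (the libc-compat.h approach): if the libc header was included first, the kernel header suppresses its duplicate definitions. The guard shown illustrates the pattern rather than quoting the header verbatim:

```c
/* Illustrative only: when glibc's <netinet/in.h> has already been included
 * (its guard macro is defined), let the libc definition win and skip the
 * kernel's copy; otherwise the kernel header provides it.
 */
#if defined(__GLIBC__) && defined(_NETINET_IN_H)
#define __UAPI_DEF_IPV6_MREQ	0
#else
#define __UAPI_DEF_IPV6_MREQ	1
#endif

#if __UAPI_DEF_IPV6_MREQ
struct ipv6_mreq {
	struct in6_addr	ipv6mr_multiaddr;	/* IPv6 multicast address of group */
	int		ipv6mr_ifindex;		/* local interface index */
};
#endif
```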
-
- 21 Dec 2014, 1 commit

By Keith Busch
Let drivers prevent entering a queue that isn't available. Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 20 Dec 2014, 1 commit

By Rafael J. Wysocki
Having switched over all of the users of CONFIG_PM_RUNTIME to use CONFIG_PM directly, turn the latter into a user-selectable option and drop the former entirely from the tree. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Kevin Hilman <khilman@linaro.org>
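An illustrative before/after for driver code affected by this change; the device callbacks are dummies for the example:

```c
/* before: runtime-PM-only callbacks guarded by the old symbol */
#ifdef CONFIG_PM_RUNTIME
static int mydev_runtime_suspend(struct device *dev) { return 0; }
static int mydev_runtime_resume(struct device *dev)  { return 0; }
#endif

/* after: the same callbacks guarded by CONFIG_PM, which is now the
 * single user-selectable option */
#ifdef CONFIG_PM
static int mydev_runtime_suspend(struct device *dev) { return 0; }
static int mydev_runtime_resume(struct device *dev)  { return 0; }
#endif
```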
-