- 24 Aug 2016, 1 commit
-
-
Committed by Steffen Klassert
An earlier patch accidentally replaced a write_lock_bh with a spin_unlock_bh. Fix this by using spin_lock_bh instead. Fixes: 9d0380df ("xfrm: policy: convert policy_lock to spinlock") Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
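A minimal sketch of the bug class being fixed, with illustrative names rather than the exact xfrm hunk: during the rwlock-to-spinlock conversion one acquire site became a release, so the critical section ran unlocked and the later unlock was unbalanced.

    static DEFINE_SPINLOCK(policy_lock);

    static void broken(void)
    {
            spin_unlock_bh(&policy_lock);   /* BUG: releases a lock never taken */
            /* ... modify policy structures ... */
            spin_unlock_bh(&policy_lock);
    }

    static void fixed(void)
    {
            spin_lock_bh(&policy_lock);     /* acquire, with bottom halves disabled */
            /* ... modify policy structures ... */
            spin_unlock_bh(&policy_lock);
    }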
-
- 12 Aug 2016, 8 commits
-
-
Committed by Florian Westphal
After the conversions in the earlier patches, all remaining spots acquire the writer lock, so we can now convert this to a normal spinlock. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Committed by Florian Westphal
It doesn't seem that important. We now get an inconsistent view of the counters, but those are stale anyway right after we drop the lock. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Committed by Florian Westphal
Don't acquire the read lock anymore and rely on rcu alone. In case a writer on another CPU changed the policy at the wrong moment (after we obtained the sk policy pointer but before we could obtain the reference), just repeat the lookup. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
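A hedged sketch of the retry pattern described here (simplified; the kernel wraps this in its own helpers):

    rcu_read_lock();
 again:
    pol = rcu_dereference(sk->sk_policy[dir]);
    if (pol && !atomic_inc_not_zero(&pol->refcnt))
            goto again;     /* policy was freed under us; repeat the lookup */
    rcu_read_unlock();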
-
Committed by Florian Westphal
Side effect: no longer disables BH (should be fine). Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Committed by Florian Westphal
If we don't hold the policy lock anymore, the refcnt might already be 0, i.e. the policy struct is about to be free'd. Switch to atomic_inc_not_zero to avoid this. On removal, policies are already unlinked from the tables (lists) before the last _put occurs, so we are not supposed to find the same 'dead' entry on the next loop; it's safe to just repeat the lookup. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
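The key primitive here is atomic_inc_not_zero(): it takes a reference only if the count is still non-zero, so a lookup can never resurrect an object whose final _put has already run. A minimal sketch with an illustrative helper name:

    /* true: caller now holds a reference; false: object is being freed */
    static bool policy_hold(struct xfrm_policy *pol)
    {
            return atomic_inc_not_zero(&pol->refcnt);
    }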
-
Committed by Florian Westphal
Once xfrm_policy_lookup_bytype no longer grabs xfrm_policy_lock, it's possible for a hash resize to occur in parallel. Use a sequence counter to block lookups while a resize is in progress, and to redo the lookup in case the hash table was altered in the meantime (which might cause us to miss the best match). Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
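A sketch of the sequence-counter scheme, with illustrative names (xfrm keeps its own counter next to the hash tables):

    static seqcount_t hash_resize_seq = SEQCNT_ZERO(hash_resize_seq);

    /* resize side, serialized against other writers */
    write_seqcount_begin(&hash_resize_seq);
    /* ... move all entries into the new table ... */
    write_seqcount_end(&hash_resize_seq);

    /* lockless lookup side */
    unsigned int seq;
    do {
            seq = read_seqcount_begin(&hash_resize_seq);
            /* ... walk the chains, remember the best match ... */
    } while (read_seqcount_retry(&hash_resize_seq, seq));

read_seqcount_begin spins while a write is in flight (odd count), which gives the "block lookup during resize" behaviour; the retry catches a resize that completed mid-walk.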
-
Committed by Florian Westphal
Since commit 56f04730 ("xfrm: add rcu grace period in xfrm_policy_destroy()") xfrm policy objects are already free'd via rcu. In order to make more places lockless (i.e. use rcu_read_lock instead of grabbing the read side of the policy rwlock) we only need to:
- use rcu_assign_pointer to store the address of the new hash table backend memory
- add an rcu barrier so that freeing of the old memory is delayed (expansion and free happen from the system workqueue, so synchronize_rcu is fine)
- use rcu_dereference to fetch the current address of the hash table.
Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
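Put together, the three steps look roughly like this (a hedged sketch with illustrative names, not the exact xfrm code):

    struct hlist_head __rcu **tablep;        /* points at the hash backend */

    /* expansion, running from the system workqueue */
    new = alloc_hash_table(nsize);
    transfer_entries(old, new);
    rcu_assign_pointer(*tablep, new);        /* publish the new memory */
    synchronize_rcu();                       /* wait out current readers */
    free_hash_table(old);                    /* only now safe to free */

    /* lockless lookup */
    rcu_read_lock();
    table = rcu_dereference(*tablep);
    /* ... hash the key and walk one chain ... */
    rcu_read_unlock();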
-
Committed by Florian Westphal
This is required once we allow lockless readers. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
- 10 Aug 2016, 11 commits
-
-
Committed by Florian Westphal
Push the lock down; after the earlier patches we can rely on rcu to make sure the state struct won't go away. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Committed by Florian Westphal
Before xfrm_state_find() can use rcu_read_lock instead of xfrm_state_lock, we need to switch users of the hash table to assign/obtain the pointers with the appropriate rcu helpers. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
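In practice this means insertions (still done under the state lock) and traversals move to the RCU-aware hlist helpers - roughly, and simplified:

    hlist_add_head_rcu(&x->bydst, net->xfrm.state_bydst + h);   /* writer */

    hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h, bydst) {
            /* lockless reader: match daddr/family/reqid, etc. */
    }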
-
Committed by Florian Westphal
Once xfrm_state_find is lockless, we have to cope with a concurrent resize operation. We use a sequence counter to block in case a resize is in progress and to detect if we might have missed a state that got moved to a new hash table. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Committed by Florian Westphal
The hash table backend memory and the state structs are free'd via kfree/vfree. Once we rely only on rcu during lookups, we have to make sure no other cpu is accessing them before doing the free. Free operations already happen from a worker, so we can use synchronize_rcu to wait until concurrent readers are done. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Committed by Florian Westphal
Once xfrm_state_lookup_byaddr no longer acquires the state lock, another cpu might be freeing the state entry at the same time. To detect this we use atomic_inc_not_zero; we then signal -EAGAIN to the caller in case our result was stale. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Committed by Florian Westphal
This is required once we allow lockless access to the bydst/bysrc hash tables. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
-
Committed by Julia Lawall
The xfrm_replay structures are never modified, so declare them as const. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
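Constification like this lets the compiler place ops-style tables in read-only memory; nothing else changes. An illustrative (hypothetical) shape:

    struct replay_ops {
            void (*advance)(u32 seq);
    };

    static void legacy_advance(u32 seq) { /* ... */ }

    /* never written after initialization, so it can live in .rodata */
    static const struct replay_ops legacy_replay = {
            .advance = legacy_advance,
    };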
-
Committed by Niklas Söderlund
The interface would not function after the system had been woken up from a suspend (echo mem > /sys/power/state) cycle. The reason is that all device registers have been reset to their default values. This patch adds sleep suspend and resume functions that detach the interface at suspend, then restore the registers and reattach the interface at resume. Only the registers that are configured solely at probe time need to be explicitly restored by the resume handler. All other registers are reconfigured either by reopening the device in the resume handler (if the device was running when the system was suspended) or when the interface is opened by a user at a later time. Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Signed-off-by: David S. Miller <davem@davemloft.net>
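A hedged sketch of that suspend/resume flow; the function names are illustrative, not the exact driver symbols:

    static int my_eth_suspend(struct device *dev)
    {
            struct net_device *ndev = dev_get_drvdata(dev);

            if (netif_running(ndev)) {
                    netif_device_detach(ndev);
                    my_eth_close(ndev);          /* stop DMA, free irqs, ... */
            }
            return 0;
    }

    static int my_eth_resume(struct device *dev)
    {
            struct net_device *ndev = dev_get_drvdata(dev);

            my_eth_restore_probe_regs(ndev);     /* registers lost in suspend */
            if (netif_running(ndev)) {
                    int ret = my_eth_open(ndev); /* reconfigures the rest */

                    if (ret < 0)
                            return ret;
                    netif_device_attach(ndev);
            }
            return 0;
    }

    static SIMPLE_DEV_PM_OPS(my_eth_pm_ops, my_eth_suspend, my_eth_resume);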
-
Committed by Julia Lawall
The b53_io_ops structures are never modified, so declare them as const. Done with the help of Coccinelle. Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Ahern
After commit 0ddcf43d ("ipv4: FIB Local/MAIN table collapse") fib_local is set but not used. Remove it. Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Guillaume Nault
Userspace programs generally need to know the name of the ppp devices they create. Both the ioctl and rtnl interfaces use the ppp<suffix> scheme to name them. But although the suffix used by the ioctl interface can be known by userspace (it's the PPP unit identifier returned by the PPPIOCGUNIT ioctl), the one used by rtnl is only known by the kernel. This patch brings more consistency between ioctl- and rtnl-based ppp devices by generating device names using the PPP unit identifier as the suffix in both cases. This way, userspace can always infer the name of the devices it creates. Signed-off-by: Guillaume Nault <g.nault@alphalink.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
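The scheme boils down to deriving the interface name from the unit identifier before the device is registered - a sketch, not the exact ppp_generic hunk:

    /* unit is the PPP unit id, the same value PPPIOCGUNIT returns */
    snprintf(dev->name, IFNAMSIZ, "ppp%i", unit);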
-
- 09 Aug 2016, 20 commits
-
-
Committed by Nicolas Iooss
This is helpful to detect errors related to format strings at compile time. Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
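The usual way to get this checking is the kernel's __printf() attribute on the declaration, so gcc verifies callers' format arguments at compile time. An illustrative declaration (the function name here is hypothetical):

    /* format string is parameter 2, variadic arguments start at 3 */
    __printf(2, 3)
    void rds_debug_printk(int level, const char *fmt, ...);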
-
Committed by Raju Lakkaraju
net: phy: Add drivers for Microsemi PHYs. Signed-off-by: Nagaraju Lakkaraju <Raju.Lakkaraju@microsemi.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Julia Lawall
Use of_property_read_bool to check for the existence of a property. The semantic patch that makes this change is as follows (http://coccinelle.lip6.fr/):
// <smpl>
@@ expression e1,e2,x; @@
- if (of_get_property(e1,e2,NULL))
-         x = true;
- else
-         x = false;
+ x = of_property_read_bool(e1,e2);
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
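In plain C, the transformation the semantic patch performs looks like this (the property name is illustrative):

    /* before */
    if (of_get_property(np, "enable-active-high", NULL))
            active_high = true;
    else
            active_high = false;

    /* after */
    active_high = of_property_read_bool(np, "enable-active-high");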
-
Committed by Haiyang Zhang
On Hyper-V hosts 2016 and later, VMs get an event message with the physical link speed when the vSwitch is changed. This patch handles this message, so the updated link speed can be reported by ethtool. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Haiyang Zhang
The physical link speed value will be reported by the ethtool command. The real speed is available from Windows 2016 hosts or later. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Grygorii Strashko
The struct cpdma_desc_pool->used_desc field can be safely removed from the CPDMA driver (and the hot path) because the used_desc counter is used only for a pool consistency check at CPDMA deinitialization, and that check can now be re-implemented as gen_pool_size(pool->gen_pool) != gen_pool_avail(pool->gen_pool). Moreover, this allows us to get rid of the warnings in cpdma_desc_pool_destroy() -> WARN_ON(pool->used_desc), which may trigger because used_desc is accessed unprotected since CPDMA switched to genalloc, and may read wrong values on SMP. Hence, remove used_desc from struct cpdma_desc_pool. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Reviewed-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
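The replacement consistency check compares the pool's total capacity with what is currently available; if they differ at teardown, descriptors are still outstanding. A sketch close to what the text describes:

    /* every descriptor must be back in the gen_pool at destroy time */
    WARN_ON(gen_pool_size(pool->gen_pool) != gen_pool_avail(pool->gen_pool));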
-
Committed by Michal Soltys
The code using this variable has been commented out in the past, as it was causing issues in upperlimited link-sharing scenarios. Signed-off-by: Michal Soltys <soltys@ziu.info> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michal Soltys
This patch simplifies how we update fsc and calculate vt from it - while keeping the expected functionality identical with how hfsc behaves currently. It also fixes a certain issue introduced with a very old patch. The idea is that instead of correcting cl_vt before the fsc curve update (rtsc_min) and correcting cl_vt after the calculation (rtsc_y2x) to keep cl_vt local to the current period - we can simply rely on virtual times and curve values always being in sync - analogously to how rsc and usc function, except that we use virtual time here. Why hasn't it been done this way since the beginning? The likely motivation (judging by the code trying to correct curves whenever possible) was to keep the virtual times as small as possible - as they have a tendency to "gallop" forward whenever their siblings and other fair-sharing subtrees are idling. On top of that, the current code is subtly bugged, so the cumulative time (without any corrections) is always kept and used in init_vf() when a new backlog period begins (using cl_cvtoff). Is the cumulative value safe? Generally yes, though corner cases are easy to create. For example, consider a 1gbit interface with some 100kbit leaf and everything else idle. With the current tick (64ns), 1s is 15625000 ticks, but the leaf is alone and it's virtual time, so in reality it's 10000 times more. IOW, 38 bits are needed to hold 1 second; 54 hold 1 day, 59 hold 1 month, 63 hold 1 year (all logarithms rounded up). It's getting somewhat dangerous, but it also requires a setup justifying such values, not to mention a class permanently backlogged for a year. In the near-most-extreme case (10gbit, 10kbit leaf), we have "enough" to hold ~13.6 days in 64 bits. Well, the issue remains mostly theoretical and cl_cvtoff has been working fine for all those years. Sensible configurations are de facto immune to this issue, and not-so-sensible ones can solve it with a cronjob whose period is inversely proportional to the insanity of the setup =) Now let's explain the subtle bug mentioned earlier. The issue is related to how offsets are kept and how we calculate virtual times and update the fair service curve(s). The issue itself is subtle, but easy to observe with long m1 segments. It was introduced in a rather old patch - commit 99296150c7 ("[NET_SCHED]: O(1) children vtoff adjustment in HFSC scheduler"), available in git://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git. Originally, when a new backlog period was started, cl_vtoff of each sibling was updated with cl_cvtmax from the past period - naturally moving all cl_vt to the proper starting point. That patch adjusted it so the cumulative offset is kept in the parent, and there is no need for traversing the list (as any subsequent child activation derives its new vt from already active sibling(s)). But with this change, cl_vtoff (of each sibling) is no longer persistent across the inactivity periods, as it's calculated from the parent's cl_cvtoff on a new backlog period, conflicting with the following curve correction from the previous period: if (cl->cl_virtual.x == vt) { cl->cl_virtual.x -= cl->cl_vtoff; cl->cl_vtoff = 0; } This essentially tries to keep the curve as if it was local to the period, and resets cl_vtoff (the cumulative vt offset of the class) to 0 when possible (read: when we have an intersection, or if the new curve is below the old one). But then it's recalculated from cl_cvtoff on the next active period.
Then the rtsc_min() call preceding the above if() doesn't really do what we expect it to do in such a scenario - it calculates the minimum of the corrected curve (from the previous backlog period) and the new uncorrected curve (with the offset derived from cl_cvtoff). Example:
tc class add dev $ife parent 1:0 classid 1:1 hfsc ls m2 100mbit ul m2 100mbit
tc class add dev $ife parent 1:1 classid 1:10 hfsc ls m1 80mbit d 10s m2 20mbit
tc class add dev $ife parent 1:1 classid 1:11 hfsc ls m2 20mbit
Start B (1:11), keep it backlogged, and let it run 6s (30s worth of vt, as A is idle). Pause B briefly to force a cl_cvtoff update in the parent (the whole 1:1 going idle). Start A (1:10) and let it run 10s. Pause A briefly to force rtsc_min(). At this point we would expect A to continue at 20mbit after a brief moment of 80mbit. But instead A will use 80mbit for the full 10s again. It's the effect of first correcting A (during 'start A'), and then - after unpausing - calculating rtsc_min() from the old corrected and the new uncorrected curve. The patch fixes this bug and keeps vt and fsc in sync (virtual times are cumulative, not local to the backlog period). Signed-off-by: Michal Soltys <soltys@ziu.info> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Wei Yongjun
A spinlock can be initialized statically with DEFINE_SPINLOCK() rather than by explicitly calling spin_lock_init(). Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com> Acked-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
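The before/after shape of this cleanup (lock names are illustrative):

    /* before: zeroed storage plus a runtime spin_lock_init() call */
    static spinlock_t old_style_lock;

    /* after: fully initialized at compile time, no init call needed */
    static DEFINE_SPINLOCK(new_style_lock);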
-
Committed by Hangbin Liu
Based on RFC 3376 5.1 and RFC 3810 6.1: if the per-interface listening change that triggers the new report is a filter-mode change, then the next [Robustness Variable] State Change Reports will include a Filter Mode Change Record. This applies even if any number of source-list changes occur in that period.
Old State    New State    State Change Record Sent
---------    ---------    ------------------------
INCLUDE (A)  EXCLUDE (B)  TO_EX (B)
EXCLUDE (A)  INCLUDE (B)  TO_IN (B)
So we should not send a source-list change if there is a filter-mode change. Here are two scenarios: 1. The group is deleted and the filter mode is EXCLUDE, which means we need to send a TO_IN { }. 2. The group is not deleted, but pmc->crcount is set, which means we need to send a normal filter-mode change. At the same time, if the type is ALLOW or BLOCK and psf->sf_crcount is set, we stop adding records and decrease sf_crcount directly. Reference: https://www.ietf.org/mail-archive/web/magma/current/msg01274.html Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Philippe Reynes
The ethtool api {get|set}_settings is deprecated. We move the mvneta driver to the new api {get|set}_link_ksettings. We use the generic function phy_ethtool_get_link_ksettings, and update the old mvneta_ethtool_set_settings to the new api. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Philippe Reynes
The private structure contains a pointer to phydev, but struct net_device already contains such a pointer. So we can remove the pointer phy_dev from the private structure and update the driver to use the one contained in struct net_device. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
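The cleanup pattern, sketched with a hypothetical private struct: phy_connect() already stores the PHY in the net_device, so the driver can read ndev->phydev instead of caching its own copy.

    struct my_priv {
            /* struct phy_device *phy_dev;  -- removed, duplicated ndev->phydev */
            void __iomem *regs;
    };

    static void my_adjust_link(struct net_device *ndev)
    {
            struct phy_device *phydev = ndev->phydev;  /* the shared pointer */

            if (phydev->link)
                    netif_carrier_on(ndev);
    }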
-
Committed by Philippe Reynes
There are two generic functions, phy_ethtool_{get|set}_link_ksettings, so we can use them instead of defining the same code in the driver. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
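Wiring in the generic helpers is then just a matter of pointing the ethtool_ops at them (a sketch; the ops variable name is illustrative):

    static const struct ethtool_ops my_ethtool_ops = {
            .get_link_ksettings = phy_ethtool_get_link_ksettings,
            .set_link_ksettings = phy_ethtool_set_link_ksettings,
    };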
-
Committed by Philippe Reynes
The private structure contains a pointer to phydev, but struct net_device already contains such a pointer. So we can remove the pointer phy from the private structure and update the driver to use the one contained in struct net_device. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Philippe Reynes
There are two generic functions, phy_ethtool_{get|set}_link_ksettings, so we can use them instead of defining the same code in the driver. There was a check on CAP_NET_ADMIN in cvm_oct_set_settings, but this check is already done in dev_ethtool, so there is no need to repeat it before calling the generic function. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Philippe Reynes
The private structure contains a pointer to phydev, but struct net_device already contains such a pointer. So we can remove the pointer phydev from the private structure and update the driver to use the one contained in struct net_device. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Ivan Vecera says: ==================== bna: remove useless global variables The set removes the useless global bnad_list as well as bnad->entry, which track a list of driver instances that is not used anywhere. The associated bnad_list_mutex is removed as well, but as it is also used to protect the bna_id increment, it is necessary to convert bna_id to atomic_t. ==================== Signed-off-by: Ivan Vecera <ivecera@redhat.com>
-
Committed by Ivan Vecera
Remove the global bnad_list_mutex as it is no longer used. This makes bnad_add_to_list() and bnad_remove_from_list() empty, so remove them too. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ivan Vecera
Change the type of bna_id to atomic_t. The bnad_list_mutex is used to prevent a race when bna_id is incremented; after this change the mutex can be removed in the next step. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
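Roughly, the conversion looks like this (a sketch; the helper name is hypothetical and the surrounding probe code is elided):

    static atomic_t bna_id;

    static void bnad_assign_id(struct bnad *bnad)
    {
            /* a unique per-instance id; no mutex needed around the increment */
            bnad->id = atomic_inc_return(&bna_id);
    }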
-
Committed by Ivan Vecera
Remove the global variable bnad_list and bnad->list_entry, which form a list of bna driver instances. The list is not used anywhere, so it is unnecessary. Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-