- 14 January 2015, 13 commits
-
Submitted by Greg Rose
NPAR-enabled partitions should warn the user when the detected link speed is less than 10 Gbps.
Change-ID: I7728bb8ce279bf0f4f755d78d7071074a4eb5f69
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mitch A Williams
On some versions of the firmware, the VF admin send queue may become stalled. In this case, the easiest solution is to just place another descriptor on the queue; the firmware will then process both requests. The early init code already accounts for this, but the runtime code does not. In the watchdog task, check for the stall condition, and if it's found, send our API version to the PF. When the PF replies, just ignore the reply.
Change-ID: I380d78185a4f284d649c44d263e648afc9b4d50c
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
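A minimal sketch of that watchdog-side workaround; the stall flag and helper placement are assumed placeholders rather than exact i40evf code, and i40evf_send_api_ver() is taken to be the same version request the init path already sends:

    /* Watchdog task: if the admin send queue appears stalled, post one more
     * harmless descriptor (the API version request) so the firmware processes
     * both; the extra reply is simply ignored when it arrives.
     */
    static void i40evf_check_asq_stall(struct i40evf_adapter *adapter)
    {
            if (adapter->flags & I40EVF_FLAG_AQ_ASQ_STALLED)   /* assumed flag */
                    i40evf_send_api_ver(adapter);              /* repost a request */
    }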
-
Submitted by Mitch A Williams
Don't enable vector 0 in the ISR, just schedule the adminq task and let it enable the vector. This prevents the task from being called reentrantly. Make sure that the vector is enabled on all exit paths of the adminq task, including error exits.
Change-ID: I53f3d14f91ed7a9e90291ea41c681122a5eca5b5
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mitch A Williams
There is always a possibility that MSI-X interrupts can get lost. To keep this problem from stalling the driver, we fire all of our MSI-X vectors during the watchdog routine. However, we should not fire the traffic vectors when the interface is closed. In this case, just fire vector 0, which is used for admin queue events. As a result, we do not enable the interrupt cause for vector 0. This can cause the admin queue handler to be called reentrantly, which causes a scary "critical section violation" message to be logged, even though no real damage is done.
Change-ID: Ic43a5184708ab2cb9a23fca7dedd808a46717795
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
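The watchdog logic described above, as a sketch; netif_running() is the real check, while the kick helper and field names are illustrative stand-ins for the driver's register writes:

    /* inside the watchdog task */
    if (netif_running(adapter->netdev)) {
            /* interface up: nudge every traffic vector in case an MSI-X
             * interrupt edge was lost somewhere
             */
            for (i = 1; i < adapter->num_msix_vectors; i++)
                    i40evf_kick_vector(adapter, i);   /* hypothetical helper */
    } else {
            /* interface closed: only the admin queue vector matters, and its
             * interrupt cause is left disabled so the adminq task re-enables
             * it itself (hence the harmless "critical section" message)
             */
            i40evf_kick_vector(adapter, 0);           /* hypothetical helper */
    }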
-
Submitted by Mitch A Williams
If we're using VLANs and communications with the PF fail during shutdown, we will leak memory because not all of the VLAN filters will be removed. To eliminate this possibility, go through the list again right before the module is removed and delete any leftover entries.
Change-ID: Id3b5315c47ca0a61ae123a96ff345d010bc41aed
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
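A sketch of that final sweep using the generic kernel list helpers; the adapter type, entry type and list head names are placeholders for the driver's own VLAN filter list:

    /* On remove: free any VLAN filter entries the PF never acknowledged,
     * so they are not leaked when the module goes away.
     */
    static void purge_leftover_vlan_filters(struct adapter_priv *adapter)  /* placeholder type */
    {
            struct vlan_filter_entry *f, *ftmp;                            /* placeholder type */

            list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) {
                    list_del(&f->list);
                    kfree(f);
            }
    }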
-
Submitted by Mitch A Williams
If the VF driver is running in the host, the shutdown code is completely broken. We cannot wait in our down routine for the PF to respond to our requests, as its admin queue task will never run while we hold the lock. Instead, we schedule operations, then let the watchdog take care of shutting things down. If the driver is being removed, then wait in the remove routine until the watchdog is done before continuing.
Change-ID: I93a58d17389e8d6b58f21e430b56ed7b4590b2c5
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mitch A Williams
These messages may be triggered during normal init of the driver if the PF or FW take a long time to respond. There's nothing really wrong, so don't freak people out by logging messages. If the communication channel really is dead, then we'll retry a few times and give up. This will log a different, scarier message that should cause consternation, which allows the user to more easily detect a genuine failure.
Change-ID: I6e2b758d4234a3a09c1015c82c8f2442a697cbdb
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mitch A Williams
These functions are redundant and duplicate functionality found in i40evf_free_all_[tx|rx]_resources.
Change-ID: Ia199908926d7a1a4b8247f75f89b5da24c9b149c
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mitch A Williams
If VF drivers are loaded in the host OS, the call to pci_disable_sriov() will cause these drivers' remove routines to be called. If the PF driver has already freed VF resources before this happens, then the VF remove routine can't properly communicate with the PF driver, causing all sorts of mayhem and error messages and hurt feelings. To fix this, we move the call to pci_disable_sriov() up to the top of the function and let it complete before freeing any VF resources.
Change-ID: I397c3997a00f6408e32b7735273911e499600236
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Acked-by: Shannon Nelson <shannon.nelson@intel.com>
Tested-by: Jim Young <james.m.young@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
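The shape of that reordering, sketched with placeholder names; only pci_disable_sriov() is the real API:

    static void pf_free_vfs(struct pci_dev *pdev, struct pf_priv *pf)   /* placeholder names */
    {
            /* Runs each host-loaded VF driver's remove() first, while the PF
             * still holds the resources those drivers need to talk to it.
             */
            pci_disable_sriov(pdev);

            /* Only now is it safe to release the PF-side VF state. */
            pf_release_vf_resources(pf);    /* hypothetical helper */
    }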
-
Submitted by David S. Miller
Ying Xue says:
====================
remove nl_sk_hash_lock from netlink socket

After the tipc socket successfully avoided the involvement of an extra lock with rhashtable_lookup_insert(), it is now possible for the netlink socket to remove its hash socket lock as well. But as the netlink socket needs a compare function to look for an object, we first introduce a new function called rhashtable_lookup_compare_insert() in commit #1, implemented on top of the original rhashtable_lookup_insert(). We subsequently remove nl_sk_hash_lock from the netlink socket with the newly introduced function in commit #2. Lastly, as Thomas requested, we add commit #3 to document that grow and shrink decision functions must themselves enforce the min/max shift limits.

v2: As Thomas pointed out, there was a race between checking portid and then setting it in commit #2. Now use the socket lock to make checking and setting portid atomic, eliminating the race.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
As commit c0c09bfd ("rhashtable: avoid unnecessary wakeup for worker queue") moved the checks of whether the hash table size exceeds its maximum threshold or reaches its minimum threshold from the resizing functions to the resizing decision functions, we should add a note in rhashtable.h stating that grow and shrink decision functions must themselves enforce the min/max shift limits; otherwise the configured min/max shift watermarks have no effect.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Cc: Thomas Graf <tgraf@suug.ch>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
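A sketch of what such a decision function now has to do, assuming the early-2015 rhashtable layout (ht->p.max_shift plus atomic shift/nelems counters); treat the field accesses as an approximation rather than the exact API:

    #include <linux/rhashtable.h>

    /* Grow above 75% load, but never beyond the configured max_shift:
     * the resize path no longer re-checks the limit on our behalf.
     */
    static bool my_grow_decision(const struct rhashtable *ht, size_t new_size)
    {
            return atomic_read(&ht->nelems) > (new_size / 4 * 3) &&
                   (!ht->p.max_shift || atomic_read(&ht->shift) < ht->p.max_shift);
    }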
-
Submitted by Ying Xue
As rhashtable_lookup_compare_insert() guarantees that search and insertion happen atomically, it is safe to eliminate nl_sk_hash_lock. After this, object insertion and removal are protected by the per-bucket lock on the write side, while object lookup is guarded by the RCU read lock on the read side.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Cc: Thomas Graf <tgraf@suug.ch>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
Introduce a new function called rhashtable_lookup_compare_insert(), which is very similar to rhashtable_lookup_insert(). The former, however, uses a caller-supplied compare function to look for an existing object and inserts the new object only if no match is found. As the entire search-and-insert sequence runs under the protection of the per-bucket lock, this helps users avoid taking an extra lock of their own.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Cc: Thomas Graf <tgraf@suug.ch>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
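A sketch of how a caller might use the new helper, with made-up object and argument types; the (ht, obj, compare, arg) signature and the insert-only-if-no-match semantics are taken from this series and treated as an assumption here:

    #include <linux/rhashtable.h>
    #include <net/net_namespace.h>

    struct my_obj {
            u32 key;
            struct net *net;
            struct rhash_head node;
    };

    struct my_cmp_arg {
            u32 key;
            struct net *net;
    };

    static bool my_obj_compare(void *ptr, void *arg)
    {
            struct my_obj *obj = ptr;
            struct my_cmp_arg *x = arg;

            return obj->key == x->key && net_eq(obj->net, x->net);
    }

    static int my_obj_insert(struct rhashtable *ht, struct my_obj *obj)
    {
            struct my_cmp_arg arg = { .key = obj->key, .net = obj->net };

            /* insert only if no entry with the same key exists in this
             * namespace; lookup and insert both happen under the bucket lock
             */
            if (!rhashtable_lookup_compare_insert(ht, &obj->node,
                                                  my_obj_compare, &arg))
                    return -EADDRINUSE;
            return 0;
    }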
-
- 13 January 2015, 27 commits
-
Submitted by David S. Miller
Pankaj Gupta says:
====================
Increase the limit of tuntap queues

Networking under KVM works best if we allocate a per-vCPU rx and tx queue in a virtual NIC. This requires a per-vCPU queue on the host side. Modern physical NICs have multiqueue support for a large number of queues. To scale a vNIC to run queues in parallel up to the maximum number of vCPUs, we need to increase the number of queues supported by tuntap.

Changes from v4:
PATCH 2: Michael S. Tsirkin - Updated change comment message.

Changes from v3:
PATCH 1: Michael S. Tsirkin - Some cleanups and updated commit message. Perf numbers on a 10 Gbps NIC.

Changes from v2:
PATCH 3: David Miller - flex array adds an extra level of indirection for a preallocated array (dropped, as the flow array is allocated using kzalloc with failover to vzalloc).

Changes from v1:
PATCH 2: David Miller - sysctl changes to limit the number of queues are not required for unprivileged users (dropped).

Changes from RFC:
PATCH 1: Sergei Shtylyov - Add an empty line after declarations.
PATCH 2: Jiri Pirko - Do not introduce new module parameters. Michael S. Tsirkin - We can use sysctl for limiting the max number of queues.

This series increases the number of tuntap queues. The original work was done by 'jasowang@redhat.com'; I am taking the 'https://lkml.org/lkml/2013/6/19/29' patch series as a reference. As per the discussion in that patch series, there were two reasons which prevented us from increasing the number of tun queues:

- The netdev_queue array in netdevice was allocated through kmalloc, which may cause a high order memory allocation when we have several queues. E.g. sizeof(netdev_queue) is 320, which means a high order allocation would happen when the device has more than 16 queues.

- We store the hash buckets in tun_struct, which results in a very large tun_struct; this high order memory allocation fails easily when memory is fragmented.

The patch 60877a32 increases the number of tx queues. Memory allocation falls back to vzalloc() when kmalloc() fails.

This series tries to address the following issues:

- Increase the number of netdev_queue queues for rx, similarly to what is done for tx queues, by falling back to vzalloc() when memory allocation with kmalloc() fails.

- Increase the number of queues to 256, the maximum being equal to the maximum number of vCPUs allowed in a guest.

I have also done testing with multiple parallel netperf sessions for different combinations of queues and CPUs. It seems to work fine without much increase in CPU load as the number of queues grows, and I see a good increase in throughput with more queues, though I was limited to 8 physical CPUs.

For this test, two hosts (Host1 & Host2) are directly connected with a cable. Host1 is running Guest1. Data is sent from Host2 to Guest1 via Host1.
Host kernel: 3.19.0-rc2+, AMD Opteron(tm) Processor 6320
NIC: Emulex Corporation OneConnect 10Gb NIC (be3)

Patch applied         %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle  throughput

Single queue, 2 vCPUs
  Before patch   all  0.19  0.00 0.16    0.07 0.04  0.10   0.00   0.18   0.00 99.26    57864.18
  After patch    all  0.99  0.00 0.64    0.69 0.07  0.26   0.00   1.58   0.00 95.77    57735.77

2 queues, 2 vCPUs
  Before patch   all  0.19  0.00 0.19    0.10 0.04  0.11   0.00   0.28   0.00 99.08    63083.09
  After patch    all  0.87  0.00 0.73    0.78 0.09  0.35   0.00   2.04   0.00 95.14    62917.03

4 queues, 4 vCPUs
  Before patch   all  0.20  0.00 0.21    0.11 0.04  0.12   0.00   0.32   0.00 99.00    80865.06
  After patch    all  0.71  0.00 0.93    0.85 0.11  0.51   0.00   2.62   0.00 94.27    86463.19

8 queues, 8 vCPUs
  Before patch   all  0.19  0.00 0.18    0.09 0.04  0.11   0.00   0.23   0.00 99.17    86795.31
  After patch    all  0.65  0.00 1.18    0.93 0.13  0.68   0.00   3.38   0.00 93.05    89459.93

16 queues, 8 vCPUs
  After patch    all  0.61  0.00 1.59    0.97 0.18  0.92   0.00   4.32   0.00 91.41   120951.60
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Pankaj Gupta
Networking under KVM works best if we allocate a per-vCPU RX and TX queue in a virtual NIC. This requires a per-vCPU queue on the host side. It is now safe to increase the maximum number of queues: the preceding patch, 'net: allow large number of rx queues', made sure this won't cause failures due to high order memory allocations. Increase it to 256, the maximum number of vCPUs KVM supports. The size of tun_struct changes from 8512 to 10496 bytes after this patch, which keeps the number of pages allocated for tun_struct at 3 both before and after the patch.
Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
Reviewed-by: David Gibson <dgibson@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Pankaj Gupta
netif_alloc_rx_queues() uses kcalloc() to allocate memory for the "struct netdev_queue *_rx" array. If we are doing a large rx queue allocation, kcalloc() might fail, so this patch falls back to vzalloc(). A similar implementation is already used for tx queue allocation in netif_alloc_netdev_queues(). Avoiding high order allocation failures with the help of vzalloc() allows us to do large rx and tx queue allocations, which in turn lets us increase the number of queues in tun. As vmalloc() adds overhead on a critical network path, the __GFP_REPEAT flag is used with kzalloc() so the fallback happens only when really needed.
Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Gibson <dgibson@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
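The allocation pattern described above, as a self-contained sketch (helper name is illustrative, not the exact net/core/dev.c code): try a quiet kzalloc first, then fall back to vzalloc, and free with kvfree(), which handles both cases.

    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    static void *alloc_queue_array(size_t sz)
    {
            void *p;

            /* physically contiguous first; no warning on failure, and
             * __GFP_REPEAT retries a little harder before giving up
             */
            p = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
            if (!p)
                    p = vzalloc(sz);   /* virtually contiguous fallback */
            return p;
    }

    /* callers release the array with kvfree(p) */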
-
Submitted by Kenneth Williams
The deleted lines are called from a function which is called: 1) only through __team_options_register via team_options_register, and 2) only during initialization / mode initialization when there are no ports attached. Therefore the ports list is guaranteed to be empty and this code will never be executed.
Signed-off-by: Kenneth Williams <ken@williamsclan.us>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David Decotigny
Signed-off-by: David Decotigny <decot@googlers.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Rickard Strandqvist
Remove the function teql_neigh_release() that is not used anywhere. This was partially found by using a static code analysis program called cppcheck.
Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Rickard Strandqvist
Remove the function aead_entries() that is not used anywhere. This was partially found by using a static code analysis program called cppcheck.
Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller
Roopa Prabhu says:
====================
bridge: support for vlan range in setlink/dellink

This series adds new flags in IFLA_BRIDGE_VLAN_INFO to indicate a vlan range. Will post corresponding iproute2 patches if these get accepted.

v1 -> v2:
- changed patches to use a nested list attribute IFLA_BRIDGE_VLAN_INFO_LIST as suggested by Scott Feldman
- dropped notification changes from the series. Will post them separately after this range message is accepted.

v2 -> v3:
- incorporated some review feedback
- include patches to fill vlan ranges during getlink
- dropped IFLA_BRIDGE_VLAN_INFO_LIST. I think it may get confusing to userspace if we introduce yet another way to send lists. With getlink already sending nested IFLA_BRIDGE_VLAN_INFO in IFLA_AF_SPEC, it seems better to use the existing format for lists and just use the flags from v2 to mark vlan ranges.
====================
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: Wilson Kok <wkok@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Roopa Prabhu
This patch adds a new function to pack vlans into ranges wherever applicable, using the flags BRIDGE_VLAN_INFO_RANGE_BEGIN and BRIDGE_VLAN_INFO_RANGE_END. The old vlan packing code is moved to a new function and continues to be called when filter_mask is RTEXT_FILTER_BRVLAN.
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Roopa Prabhu
This filter is the same as RTEXT_FILTER_BRVLAN except that it tries to compress consecutive vlans into ranges. This helps on systems with a large number of configured vlans.
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Roopa Prabhu
This patch changes the bridge IFLA_AF_SPEC netlink attribute parser to look for more than one IFLA_BRIDGE_VLAN_INFO attribute. This allows userspace to pack more than one vlan in the setlink msg. The dumps were already sending more than one vlan info in the getlink msg. This patch also adds the bridge_vlan_info flags BRIDGE_VLAN_INFO_RANGE_BEGIN and BRIDGE_VLAN_INFO_RANGE_END to indicate the start and end of a vlan range, and deletes the unused ifla_br_policy.
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
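A sketch of how a parser can expand a BEGIN/END pair while walking the nested attributes; af_spec and add_vlan() are placeholders, while the attribute, struct and flag names come from the uapi header:

    #include <net/netlink.h>
    #include <linux/if_bridge.h>

    static int parse_vlan_spec(struct nlattr *af_spec)
    {
            struct nlattr *attr;
            u16 range_start = 0;
            int rem;

            nla_for_each_nested(attr, af_spec, rem) {
                    struct bridge_vlan_info *vinfo;
                    u16 v;

                    if (nla_type(attr) != IFLA_BRIDGE_VLAN_INFO)
                            continue;
                    if (nla_len(attr) != sizeof(*vinfo))
                            return -EINVAL;

                    vinfo = nla_data(attr);
                    if (vinfo->flags & BRIDGE_VLAN_INFO_RANGE_BEGIN) {
                            range_start = vinfo->vid;          /* remember the start */
                    } else if (vinfo->flags & BRIDGE_VLAN_INFO_RANGE_END) {
                            for (v = range_start; v <= vinfo->vid; v++)
                                    add_vlan(v, vinfo->flags); /* hypothetical helper */
                    } else {
                            add_vlan(vinfo->vid, vinfo->flags);  /* single vlan */
                    }
            }
            return 0;
    }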
-
Submitted by Vincenzo Maffione
This patch removes some unused arrays from the netfront private data structures. These arrays were used in "flip" receive mode.
Signed-off-by: Vincenzo Maffione <v.maffione@gmail.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Shrikrishna Khare
Failing to reinitialize on wakeup results in loss of network connectivity for the vmxnet3 interface.
Signed-off-by: Srividya Murali <smurali@vmware.com>
Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
Reviewed-by: Shreyas N Bhatewara <sbhatewara@vmware.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jonathan Toppins
Remove the empty array element initializer and size the array with BOND_OPT_LAST so the compiler will complain if more elements are in there than should be. An interesting unwanted side effect of the old initializer is that if one inserts new options into the middle of the array, then this initializer will zero out the option that equals BOND_OPT_TLB_DYNAMIC_LB+1.

Example: extend the OPTS enum:

enum {
...
BOND_OPT_TLB_DYNAMIC_LB,
BOND_OPT_LACP_NEW1,
BOND_OPT_LAST
};

Now insert into the bond_opts array:

static const struct bond_option bond_opts[] = {
...
[BOND_OPT_LACP_RATE] = { .... unchanged stuff .... },
[BOND_OPT_LACP_NEW1] = { ... new stuff ... },
...
[BOND_OPT_TLB_DYNAMIC_LB] = { .... unchanged stuff ....},
{ } // MARK A
};

Since BOND_OPT_LACP_NEW1 = BOND_OPT_TLB_DYNAMIC_LB+1, the last initializer (MARK A) will overwrite the contents of BOND_OPT_LACP_NEW1, which can be easily viewed with the crash utility.
Signed-off-by: Jonathan Toppins <jtoppins@cumulusnetworks.com>
Cc: Andy Gospodarek <gospo@cumulusnetworks.com>
Cc: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: Andy Gospodarek <gospo@cumulusnetworks.com>
Acked-by: Nikolay Aleksandrov <nikolay@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller
Ying Xue says:
====================
tipc: make tipc support namespace

This patchset aims to add net namespace support to the TIPC stack. Currently the TIPC module declares the following global resources:
- TIPC network identification number
- TIPC node table
- TIPC bearer list table
- TIPC broadcast link
- TIPC socket reference table
- TIPC name service table
- TIPC node address
- TIPC service subscriber server
- TIPC random value
- TIPC netlink

In order to make TIPC namespace-aware, each of the above resources must be allocated, initialized and destroyed per namespace. Therefore, the major work of this patchset is to isolate these global resources and make them private to each namespace. However, before these changes can happen, some necessary preparation work must first be done: convert the socket reference table to the generic rhashtable, clean up the core.c and core.h files, remove unnecessary wrapper functions for kernel timer interfaces, and so on.

It should be noted that commit #1 ("tipc: fix bug in broadcast retransmit code") was already submitted to the 'net' tree, so please see the link below:
http://patchwork.ozlabs.org/patch/426717/
Since it is a prerequisite for the rest of the series to apply, I prepend it to the series.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
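The conversion in the series follows the kernel's standard pernet pattern, sketched below; the tipc_net fields shown are a simplified illustration of where the former globals end up, not the exact final layout:

    #include <linux/types.h>
    #include <linux/random.h>
    #include <net/net_namespace.h>
    #include <net/netns/generic.h>

    static int tipc_net_id;         /* index into each namespace's generic area */

    struct tipc_net {
            u32 own_addr;           /* was the tipc_own_addr global */
            u32 random;             /* was the module-wide random value */
            /* node table, bearer list, name table, socket table, ... */
    };

    static int __net_init tipc_init_net(struct net *net)
    {
            struct tipc_net *tn = net_generic(net, tipc_net_id);

            tn->own_addr = 0;
            tn->random = prandom_u32();
            return 0;               /* allocate/init per-namespace tables here */
    }

    static void __net_exit tipc_exit_net(struct net *net)
    {
            /* tear down the per-namespace tables, links and servers here */
    }

    static struct pernet_operations tipc_net_ops = {
            .init = tipc_init_net,
            .exit = tipc_exit_net,
            .id   = &tipc_net_id,
            .size = sizeof(struct tipc_net),
    };

    /* from module init: err = register_pernet_subsys(&tipc_net_ops); */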
-
Submitted by Ying Xue
Currently the tipc module only allows users in the "init_net" namespace to configure it through the netlink interface. But now almost every tipc component is namespace-aware, so it is time to open up this permission to users residing in other namespaces, allowing them to configure their own tipc stack instance through the netlink interface.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
After namespaces are supported, each namespace should own its private random value, so the global variable representing the random value must be moved to the tipc_net structure.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
TIPC establishes one subscriber server which allows users to subscribe to the name service status they are interested in. Once tipc supports namespaces, one dedicated tipc stack instance is created for each namespace, and each instance can be regarded as an independent TIPC node. As a result, a subscriber server must be built for each namespace.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
If net namespaces are supported in tipc, each namespace will be treated as a separate tipc node. Therefore, every namespace must own its private tipc node address. This means the "tipc_own_addr" global variable holding the node address must be moved to the tipc_net structure to satisfy this requirement. As a result, users can also assign a node address for every namespace.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
The TIPC name table is used to store the mapping between TIPC service names and socket port IDs. When tipc supports namespaces, it allows users to publish service names owned only by a certain namespace. Therefore, every namespace must have its private name table to prevent service names published in one namespace from being contaminated by service names in another namespace. The name table global variable (i.e. nametbl) and its lock must therefore be moved to the tipc_net structure, and a namespace parameter must be added to the relevant functions so that they can reach the name table defined in the tipc_net structure.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
Currently the tipc socket table is statically allocated as a global variable. Through it, we can look up a socket instance by port ID, insert a new socket instance into the table, and delete a socket from the table. But when tipc supports net namespaces, each namespace must own its own socket table, so the global socket table variable must be redefined in the tipc_net structure. As a consequence, a new socket table is allocated when a new namespace is created, and deallocated when the namespace is destroyed.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
The TIPC broadcast link is statically established and its relevant state is maintained in the global variables "bcbearer", "bclink" and "bcl". To allow different namespaces to own different broadcast link instances, these variables must be moved to the tipc_net structure, and broadcast link instances are then allocated and initialized when a namespace is created.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
The bearer list, defined as a global variable, is used to store bearer instances. When tipc supports net namespaces, bearers created in one namespace must be isolated from those allocated in other namespaces, which requires that the bearer list (bearer_list) be moved to the tipc_net structure. As a result, a net namespace pointer has to be passed to the functions which access the bearer list.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
The global variables associated with the node table are:
- node table list (node_htable)
- node hash table list (tipc_node_list)
- node table lock (node_list_lock)
- node number counter (tipc_num_nodes)
- node link number counter (tipc_num_links)

To make the node table support namespaces, the above global variables must be moved to the tipc_net structure so that they remain private to each namespace. As a consequence, these variables are allocated and initialized when a namespace is created, and deallocated when the namespace is destroyed. After the change, functions associated with these variables have to use a namespace pointer to access them, so adding a namespace pointer as a parameter of these functions is the major change made in this commit.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
Using the namespace infrastructure, make the "tipc_net_id" global variable per-namespace and rename it to "net_id". For the conversion to succeed, a networking namespace instance must be passed to the relevant functions, allowing them to access the per-namespace "net_id" variable.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ying Xue
In order to make the tipc socket table namespace-aware, a networking namespace instance must be passed to tipc_sk_lookup(), allowing it to look up a tipc socket instance with a given port ID from a concrete socket table. However, as tipc_sk_timeout() currently takes only a port ID parameter and is not namespace-aware, it is unable to obtain the correct socket instance through tipc_sk_lookup() with just a port ID, especially once namespaces are fully supported. If the port ID is replaced with a socket instance as tipc_sk_timeout()'s parameter, there is no need to look up the socket table at all. But as the timer handler, tipc_sk_timeout(), runs asynchronously, a socket reference must be held before its timer is launched, and it must be carefully checked whether that reference needs to be put when the timer is terminated.
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Tested-by: Tero Aho <Tero.Aho@coriant.com>
Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
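A sketch of the reference discipline this implies, using the 2015-era timer prototype; the re-arm helper, probe interval and handler body are placeholders:

    static void tipc_sk_rearm_timer(struct sock *sk, unsigned long probe_interval)
    {
            /* sk_reset_timer() takes a socket reference if the timer was idle,
             * so the socket cannot be freed while the timer is pending
             */
            sk_reset_timer(sk, &sk->sk_timer, jiffies + probe_interval);
    }

    static void tipc_sk_timeout(unsigned long data)
    {
            struct sock *sk = (struct sock *)data;

            bh_lock_sock(sk);
            /* ... probe the peer or tear the connection down ... */
            bh_unlock_sock(sk);
            sock_put(sk);   /* drop the reference taken when the timer was armed */
    }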
-