Age | Commit message | Author | Files | Lines
2014-09-13 | bonding: 3ad: convert to bond->mode_lock | Nikolay Aleksandrov | 3 | -57/+22
Now that we have bond->mode_lock, we can remove the state_machine_lock and use it in its place. There are no fast paths requiring the per-port spinlocks, so it should be okay to consolidate them into mode_lock. Also move the locking inside the unbinding function, as we don't want to expose mode_lock outside of the specific modes. Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com> Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | bonding: alb: convert to bond->mode_lock | Nikolay Aleksandrov | 4 | -89/+35
The ALB/TLB specific spinlocks are no longer necessary as we now have bond->mode_lock for this purpose, so convert them and remove them from struct alb_bond_info. Also remove the unneeded lock/unlock functions and use spin_lock/unlock directly. Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com> Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | bonding: convert curr_slave_lock to a spinlock and rename it | Nikolay Aleksandrov | 3 | -7/+6
curr_slave_lock is now a misleading name; a much better name is mode_lock, as it will be used for each mode's purposes. It is also no longer necessary to use an rwlock, a simple spinlock is enough. Suggested-by: Jay Vosburgh <jay.vosburgh@canonical.com> Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | bonding: clean curr_slave_lock use | Nikolay Aleksandrov | 4 | -70/+14
Almost all users of curr_slave_lock already hold RTNL, as we've discussed previously, so there's no point in using it. The one case where the lock must stay is the 3ad code; in fact it's the only one. It's okay to remove it from bond_do_fail_over_mac() as it's called with RTNL and drops curr_slave_lock anyway. bond_change_active_slave() is one of the main places where curr_slave_lock was used; it's okay to remove it there as all callers take RTNL these days before calling it, which is why we move the ASSERT_RTNL() to the beginning to catch any potential offenders of this rule. The RTNL argument applies to all of the places from which curr_slave_lock is removed in this patch. Also remove the unnecessary bond_deref_active_protected() macro and use rtnl_dereference() instead. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
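As a rough sketch of the rtnl_dereference() pattern referred to above (the struct and field names are illustrative stand-ins, not the real bonding definitions):
    #include <linux/rtnetlink.h>
    #include <linux/rcupdate.h>

    struct slave;                           /* opaque for this sketch */

    struct bond_example {                   /* hypothetical stand-in for struct bonding */
            struct slave __rcu *curr_active_slave;
    };

    /* Control-path read: RTNL is held, so no rcu_read_lock() is needed;
     * rtnl_dereference() documents (and lockdep-checks) that assumption. */
    static struct slave *bond_example_active(struct bond_example *bond)
    {
            ASSERT_RTNL();
            return rtnl_dereference(bond->curr_active_slave);
    }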
2014-09-13 | bonding: alb: remove curr_slave_lock | Nikolay Aleksandrov | 1 | -36/+3
First, in rlb_teach_disabled_mac_on_primary() it's okay to remove curr_slave_lock as all callers except bond_alb_monitor() already hold RTNL, and if bond_alb_monitor() happens to be executing we can at most have a short period of bad throughput (very unlikely though). In bond_alb_monitor() it's okay to remove the read_lock as the slave list is walked with RCU, and the worst that could happen is a second transmitter at the same time, and only for a period which is currently 10 seconds (bond_alb.h: BOND_ALB_LP_TICKS). And bond_alb_handle_active_change() is okay because it's always called with RTNL. Removed the ASSERT_RTNL() because it will be inserted in the parent function in a following patch. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | bonding: 3ad: clean up curr_slave_lock usage | Nikolay Aleksandrov | 1 | -7/+3
Remove the read_lock in bond_3ad_lacpdu_recv() since when the slave is being released its rx_handler is removed before 3ad unbind, so even if packets arrive, they won't see the slave in an inconsistent state. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | virtio_ring: unify direct/indirect code paths. | Rusty Russell | 1 | -76/+52
virtqueue_add() populates the virtqueue descriptor table from the sgs given. If it uses an indirect descriptor table, it puts a single descriptor in the descriptor table pointing to the kmalloc'ed indirect table where the sg is populated. Previously vring_add_indirect() did the allocation and the simple linear layout. We replace that with alloc_indirect(), which allocates the indirect table and then chains it like the normal descriptor table so we can reuse the core logic. This slows down pktgen (which uses direct descriptors) by less than half a percent, as well as vring_bench, but it's far neater. vring_bench before: 1061485790-1104800648(1.08254e+09+/-6.6e+06)ns vring_bench after: 1125610268-1183528965(1.14172e+09+/-8e+06)ns pktgen before: 787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0 pktgen after: 779988-790404(786391+/-2.5e+03)pps 361-366(364.35+/-1.3)Mb/sec (361914432-366747456(3.64885e+08+/-1.2e+06)bps) errors: 0 Now, if we force indirect descriptors by turning off any_header_sg in virtio_net.c: pktgen before: 713773-721062(718374+/-2.1e+03)pps 331-334(332.95+/-0.92)Mb/sec (331190672-334572768(3.33325e+08+/-9.6e+05)bps) errors: 0 pktgen after: 710542-719195(714898+/-2.4e+03)pps 329-333(331.15+/-1.1)Mb/sec (329691488-333706480(3.31713e+08+/-1.1e+06)bps) errors: 0 Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | virtio_ring: assume sgs are always well-formed. | Rusty Russell | 1 | -49/+19
We used to have several callers which just used arrays. They're gone, so we can use sg_next() everywhere, simplifying the code. On my laptop, this slowed down vring_bench by 15%: vring_bench before: 936153354-967745359(9.44739e+08+/-6.1e+06)ns vring_bench after: 1061485790-1104800648(1.08254e+09+/-6.6e+06)ns However, a more realistic test using pktgen on an AMD FX(tm)-8320 saw a few percent improvement: pktgen before: 767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0 pktgen after: 787781-796334(793165+/-2.4e+03)pps 365-369(367.5+/-1.2)Mb/sec (365530384-369498976(3.68028e+08+/-1.1e+06)bps) errors: 0 Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
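For context, walking a well-formed scatterlist with sg_next() looks roughly like this (a generic sketch, not code from the patch):
    #include <linux/scatterlist.h>

    /* sg_next() returns NULL after the entry marked with sg_mark_end(),
     * which is why the sgs must be well-formed and properly terminated. */
    static unsigned int sg_total_len(struct scatterlist *sgl)
    {
            struct scatterlist *sg;
            unsigned int len = 0;

            for (sg = sgl; sg; sg = sg_next(sg))
                    len += sg->length;
            return len;
    }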
2014-09-13 | virtio_net: pass well-formed sgs to virtqueue_add_*() | Rusty Russell | 1 | -1/+4
This is the only driver which doesn't hand virtqueue_add_inbuf and virtqueue_add_outbuf a well-formed, well-terminated sg. Fix it, so we can make virtio_add_* simpler. pktgen results:
    modprobe pktgen
    echo 'add_device eth0' > /proc/net/pktgen/kpktgend_0
    echo nowait 1 > /proc/net/pktgen/eth0
    echo count 1000000 > /proc/net/pktgen/eth0
    echo clone_skb 100000 > /proc/net/pktgen/eth0
    echo dst_mac 4e:14:25:a9:30:ac > /proc/net/pktgen/eth0
    echo dst 192.168.1.2 > /proc/net/pktgen/eth0
    for i in `seq 20`; do echo start > /proc/net/pktgen/pgctrl; tail -n1 /proc/net/pktgen/eth0; done
Before: 746547-793084(786421+/-9.6e+03)pps 346-367(364.4+/-4.4)Mb/sec (346397808-367990976(3.649e+08+/-4.5e+06)bps) errors: 0
After: 767390-792966(785159+/-6.5e+03)pps 356-367(363.75+/-2.9)Mb/sec (356068960-367936224(3.64314e+08+/-3e+06)bps) errors: 0
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
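A minimal sketch of what a well-formed, well-terminated sg handed to virtqueue_add_outbuf() looks like (the function and buffer names are illustrative, not the virtio_net code):
    #include <linux/scatterlist.h>
    #include <linux/virtio.h>
    #include <linux/gfp.h>

    static int example_add_outbuf(struct virtqueue *vq, void *hdr, size_t hdr_len,
                                  void *data, size_t data_len, void *token)
    {
            struct scatterlist sg[2];

            sg_init_table(sg, 2);           /* zeroes the entries and marks sg[1] as the end */
            sg_set_buf(&sg[0], hdr, hdr_len);
            sg_set_buf(&sg[1], data, data_len);

            return virtqueue_add_outbuf(vq, sg, 2, token, GFP_ATOMIC);
    }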
2014-09-13 | Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next | David S. Miller | 10 | -383/+396
Jeff Kirsher says: ==================== Intel Wired LAN Driver Updates 2014-09-12 This series contains updates to e1000, ixgbe and ixgbevf. Mark provides two fixes to reduce compile warnings produced by ixgbe and ixgbevf. Alex provides two patches for ixgbe: the first removes the receive buffer allocation at the end of ixgbe_clean_rx_irq(), to avoid the extra latency introduced by the MMIO write. The second patch addresses several issues in the current ixgbe implementation of busy poll sockets. It was possible for frames to be delivered out of order if they were held in GRO, so address this by flushing the GRO buffers before releasing the q_vector back to the idle state. Also, we had to take a spinlock when changing the state to and from idle; to resolve this, the state value is replaced with an atomic: atomic_cmpxchg changes the value from idle, and a simple atomic set restores it back to idle after we have acquired it. This allows us to use a locked operation only on acquiring the vector, with no locked operation needed to release it. Florian Westphal provides several patches for e1000 which do some cleanup and updating of the driver. He moved e1000_tbi_adjust_stats() so that he could make the function static, added a helper function to deal with the tbi workaround that was located in 2 different Rx clean functions, and added an e1000_rx_buffer struct for use on receive, since transmit and receive have different requirements. Finally, e1000 is updated to use the napi_gro_frags API. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
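The lock-free acquire/release scheme described for the ixgbe busy-poll state can be sketched as follows (the state names and struct are hypothetical, not the actual ixgbe q_vector):
    #include <linux/atomic.h>
    #include <linux/types.h>

    enum { QV_IDLE = 0, QV_OWNED = 1 };     /* hypothetical state values */

    struct qv_example {
            atomic_t state;
    };

    /* Acquire: one locked operation; fails if another context owns the vector. */
    static bool qv_try_acquire(struct qv_example *qv)
    {
            return atomic_cmpxchg(&qv->state, QV_IDLE, QV_OWNED) == QV_IDLE;
    }

    /* Release: a plain atomic store, no locked operation required. */
    static void qv_release(struct qv_example *qv)
    {
            atomic_set(&qv->state, QV_IDLE);
    }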
2014-09-13 | Merge branch 'sched_rcu' | David S. Miller | 0 | -0/+0
John Fastabend says: ==================== net/sched rcu classifiers and tcf This series converts the tcf_proto usage to RCU. This requires updating each classifier individually to handle the new copy/update requirement and also updating the core list traversals. This makes the assumption that updates to the tables are infrequent in comparison to the packets per second being classified. On 10Gbps running near line rate we can easily produce 12+ million packets per second, so IMO this is a reasonable assumption. The updates are serialized by RTNL. I have done some basic testing on this series and do not see any immediate splats or issues. The patch series has been running on my dev systems for a month or so now and I've not seen any issues, although my configurations are not overly complicated. My test cases at this point cover all the filters with a tight loop to add/remove filters, some basic estimator tests where I add an estimator to the qdisc and verify the statistics are accurate using pktgen, and finally a small script to exercise the 'tc actions' interface. Feel free to send me more tests off list and I can run them. This is prep work to drop the qdisc lock, with the first target being the ingress qdisc. Still to be done is making the tc actions RCU safe and the statistics per cpu. Those patches are in the works. Comments: - Checkpatch is still giving errors on some >80 char lines; I know about this. IMO the way to fix this is to restructure the sched code to avoid being so heavily indented, but doing that here would bloat the patchset, and anyway there are already lots of >80 char lines in these files. I would prefer to keep the patches as is, but let me know if others think I should fix these and I will. A follow-up patch set could restructure the code and fix this throughout the code blocks. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | net: sched: rcu'ify cls_bpf | John Fastabend | 1 | -47/+47
This patch makes the cls_bpf classifier RCU safe. The tcf_lock was being used to protect a list of cls_bpf_prog; now this list is RCU safe and updates occur with rcu_replace. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
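The general RCU update pattern for such a list (updater under RTNL, readers lockless) looks roughly like the sketch below; the struct is a hypothetical stand-in and list_replace_rcu()/kfree_rcu() are shown as the generic mechanism, not the exact lines of the patch:
    #include <linux/rculist.h>
    #include <linux/slab.h>

    struct prog_example {                   /* hypothetical stand-in for cls_bpf_prog */
            struct list_head link;
            struct rcu_head rcu;
    };

    /* Called with RTNL held; readers traverse the list with
     * list_for_each_entry_rcu() under rcu_read_lock(). */
    static void prog_replace(struct prog_example *old, struct prog_example *new)
    {
            list_replace_rcu(&old->link, &new->link);
            kfree_rcu(old, rcu);            /* freed only after a grace period */
    }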
2014-09-13 | net: sched: rcu'ify cls_rsvp | John Fastabend | 1 | -70/+90
Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | net: sched: make cls_u32 lockless | John Fastabend | 1 | -73/+110
Make the cls_u32 classifier safe to run without holding the lock. This patch converts the statistics kept on the read side in u32_classify into per-CPU counters. This patch was tested with a tight u32 filter add/delete loop while generating traffic with pktgen. By running pktgen on vlan devices created on top of a physical device we can hit the qdisc layer correctly. For ingress qdiscs a loopback cable was used.
    for i in {1..100}; do
            q=`echo $i%8|bc`
            echo -n "u32 tos: iteration $i on queue $q"
            tc filter add dev p3p2 parent $p prio $i u32 match ip tos 0x10 0xff \
                    action skbedit queue_mapping $q
            sleep 1
            tc filter del dev p3p2 prio $i
            echo -n "u32 tos hash table: iteration $i on queue $q"
            tc filter add dev p3p2 parent $p protocol ip prio $i handle 628: u32 divisor 1
            tc filter add dev p3p2 parent $p protocol ip prio $i u32 \
                    match ip protocol 17 0xff link 628: offset at 0 mask 0xf00 shift 6 plus 0
            tc filter add dev p3p2 parent $p protocol ip prio $i u32 \
                    ht 628:0 match ip tos 0x10 0xff action skbedit queue_mapping $q
            sleep 2
            tc filter del dev p3p2 prio $i
            sleep 1
    done
Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | net: sched: make cls_u32 per cpu | John Fastabend | 1 | -16/+59
This uses per-CPU counters in cls_u32 in preparation for converting it over to RCU. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
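A small sketch of the per-cpu counter pattern this refers to (the struct and field names are illustrative; alloc_percpu() and this_cpu_inc() are the stock kernel primitives):
    #include <linux/percpu.h>
    #include <linux/types.h>
    #include <linux/errno.h>

    struct hit_stats {                      /* hypothetical per-CPU stats block */
            u64 rcnt;                       /* times the rule was checked */
            u64 rhit;                       /* times the rule matched */
    };

    struct knode_example {
            struct hit_stats __percpu *pf;
    };

    static int knode_alloc_stats(struct knode_example *n)
    {
            n->pf = alloc_percpu(struct hit_stats);
            return n->pf ? 0 : -ENOMEM;
    }

    /* Fast path: bump only this CPU's counter, no shared cacheline, no lock. */
    static void knode_count_hit(struct knode_example *n)
    {
            this_cpu_inc(n->pf->rhit);
    }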
2014-09-13 | net: sched: RCU cls_tcindex | John Fastabend | 2 | -94/+164
Make cls_tcindex RCU safe. This patch adds a new RCU routine, rcu_dereference_bh_rtnl(), to check that the caller holds either the RCU-BH read lock or RTNL. This is needed to handle the case where tcindex_lookup() is called from both contexts. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
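The helper is essentially a thin wrapper over the existing RCU check macros; a sketch of the idea (a paraphrase, not necessarily the exact hunk) is:
    #include <linux/rtnetlink.h>
    #include <linux/rcupdate.h>

    /* Valid when called from the BH-protected classify path *or* from an
     * RTNL-protected control path; lockdep accepts either context. */
    #define example_rcu_dereference_bh_rtnl(p) \
            rcu_dereference_bh_check(p, lockdep_rtnl_is_held())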
2014-09-13 | net: sched: RCU cls_route | John Fastabend | 1 | -94/+132
RCUify the route classifier. For now, however, spinlocks are used to protect the fastmap cache. The issue here is that the fastmap may be read by one CPU while the cache is being updated by another. An array of pointers could be one possible solution. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | net: sched: fw use RCU | John Fastabend | 1 | -34/+77
RCU'ify fw classifier. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | net: sched: cls_flow use RCU | John Fastabend | 1 | -61/+84
Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | net: sched: cls_cgroup use RCU | John Fastabend | 1 | -24/+39
Make the cgroup classifier safe for RCU. This also drops the calls in the classify routine that were doing an rcu_read_lock()/rcu_read_unlock(). If the rcu_read_lock() isn't held when entering this routine we have issues with deleting the classifier chain, so remove the unnecessary rcu_read_lock()/rcu_read_unlock() pair, noting that AFAIK all paths hold rcu_read_lock. If there is a case where classify is called without the RCU read lock, an RCU splat will occur and we can correct it. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | net: sched: cls_basic use RCU | John Fastabend | 1 | -35/+45
Enable basic classifier for RCU. Dereferencing tp->root may look a bit strange here but it is needed by my accounting because it is allocated at init time and needs to be kfree'd at destroy time. However because it may be referenced in the classify() path we must wait an RCU grace period before free'ing it. We use kfree_rcu() and rcu_ APIs to enforce this. This pattern is used in all the classifiers. Also the hgenerator can be incremented without concern because it is always incremented under RTNL. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
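The free-after-grace-period pattern described here looks roughly like the following sketch (the struct is a hypothetical stand-in for the classifier head, not the cls_basic definition):
    #include <linux/rtnetlink.h>
    #include <linux/rcupdate.h>
    #include <linux/list.h>
    #include <linux/slab.h>

    struct head_example {                   /* hypothetical stand-in for the classifier head */
            struct list_head flist;
            struct rcu_head rcu;
    };

    /* Destroy path, under RTNL: unpublish the pointer first, then let
     * kfree_rcu() defer the free past any in-flight classify() readers. */
    static void head_destroy(struct head_example __rcu **proot)
    {
            struct head_example *head = rtnl_dereference(*proot);

            RCU_INIT_POINTER(*proot, NULL);
            kfree_rcu(head, rcu);
    }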
2014-09-13 | net: rcu-ify tcf_proto | John Fastabend | 17 | -88/+121
rcu'ify tcf_proto; this allows calling tc_classify() without holding any locks. Updaters are protected by RTNL. This patch prepares the core net_sched infrastructure for running the classifier/action chains without holding the qdisc lock; however, it does nothing to ensure the cls_xxx and act_xxx types also work without locking. Additional patches are required to address the fallout. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-13 | net: qdisc: use rcu prefix and silence sparse warnings | John Fastabend | 6 | -42/+82
Add __rcu notation to qdisc handling; by doing this we can make smatch output more legible. And anyway, some of the cases should be using rcu_dereference(), see qdisc_all_tx_empty(), qdisc_tx_changing(), and so on. Also, the *wake_queue() API is commonly called from driver timer routines without the rcu lock or rtnl lock, so I added rcu_read_lock() blocks around netif_wake_subqueue and netif_tx_wake_queue. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
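With the qdisc pointer annotated __rcu, lockless readers have to go through the rcu accessors; a generic sketch of such a check (not the exact patch hunks, and assuming txq->qdisc carries the __rcu annotation) is:
    #include <linux/netdevice.h>
    #include <linux/rcupdate.h>

    /* Sparse is silenced because every access to the __rcu-annotated
     * txq->qdisc now goes through an rcu_* accessor. */
    static bool txq_qdisc_is(struct netdev_queue *txq, const struct Qdisc *which)
    {
            bool ret;

            rcu_read_lock();
            ret = rcu_dereference(txq->qdisc) == which;
            rcu_read_unlock();
            return ret;
    }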
2014-09-12 | sunvnet: Avoid sending superfluous LDC messages. | Sowmini Varadhan | 2 | -5/+72
When sending out a burst of packets across multiple descriptors, it is sufficient to send one LDC "start" trigger for the first descriptor, so do not send an LDC "start" for every pass through vnet_start_xmit. Similarly, it is sufficient to send one "DRING_STOPPED" trigger for the last dring (and if that fails, hold off and send the trigger later). Reducing the number of LDC messages helps avoid filling up the LDC channel with superfluous messages that risk triggering flow-control on the channel, and also boosts performance. Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Raghuram Kothakota <raghuram.kothakota@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-12 | net: axienet: remove unnecessary ether_setup after alloc_etherdev | Subbaraya Sundeep Bhatta | 1 | -1/+0
Calling ether_setup is redundant since alloc_etherdev calls it. Signed-off-by: Subbaraya Sundeep Bhatta <sbhatta@xilinx.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-12 | ethernet: amd: use pr_info_once() | Varka Bhadram | 1 | -4/+2
Use pr_info_once() to print the version info of the driver in the probe function only once. There is no need for the static variable here. Signed-off-by: Varka Bhadram <varkab@cdac.in> Signed-off-by: David S. Miller <davem@davemloft.net>
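For reference, the resulting pattern is simply the following (a generic sketch; the driver and version strings are placeholders):
    #include <linux/printk.h>

    static int example_probe(void)
    {
            /* pr_info_once() keeps its own one-shot guard, so the driver no
             * longer needs a static "already printed" variable. */
            pr_info_once("example driver - version %s\n", "1.0");
            return 0;
    }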
2014-09-12 | udp: Fix inverted NAPI_GRO_CB(skb)->flush test | Scott Wood | 2 | -2/+2
Commit 2abb7cdc0d ("udp: Add support for doing checksum unnecessary conversion") caused napi_gro_cb structs with the "flush" field zero to take the "udp_gro_receive" path rather than the "set flush to 1" path that they would previously take. As a result I saw booting from an NFS root hang shortly after starting userspace, with "server not responding" messages. This change to the handling of "flush == 0" packets appears to be incidental to the goal of adding new code in the case where skb_gro_checksum_validate_zero_check() returns zero. Based on that and the fact that it breaks things, I'm assuming that it is unintentional. Fixes: 2abb7cdc0d ("udp: Add support for doing checksum unnecessary conversion") Cc: Tom Herbert <therbert@google.com> Signed-off-by: Scott Wood <scottwood@freescale.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-12 | Merge branch 'sock_queue_err_skb' | David S. Miller | 2 | -11/+22
Alexander Duyck says: ==================== Address reference counting issues with sock_queue_err_skb After looking over the code for skb_clone_sk, following some comments made by Eric Dumazet, I have come to the conclusion that skb_clone_sk is taking the correct approach in how to handle sk_refcnt when creating a buffer that is eventually meant to be returned to the socket via the sock_queue_err_skb function. However, upon review of other callers I found what I believe to be a possible reference count issue in the path for handling "wifi ack" packets. To address this I have applied the same logic that is currently in place, so that the sk_refcnt will be forced to stay at least 1, or we will not provide an skb to return in the sk_error_queue. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-12 | mac80211: Resolve sk_refcnt/sk_wmem_alloc issue in wifi ack path | Alexander Duyck | 2 | -11/+9
There is a possible issue with the use, or lack thereof, of sk_refcnt and sk_wmem_alloc in the wifi ack status functionality. Specifically, if a socket were to request acknowledgements, and the socket were to have sk_refcnt drop to 0 (resulting in it waiting on sk_wmem_alloc to reach 0), it would be possible to have sock_queue_err_skb orphan the last buffer, resulting in __sk_free being called on the socket. After this the buffer is enqueued on sk_error_queue; however, the queue has already been flushed, resulting in at least a memory leak, if not data corruption. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-12 | skb: Add documentation for skb_clone_sk | Alexander Duyck | 1 | -0/+13
This change adds some documentation to the call skb_clone_sk. This is meant to help clarify the purpose of the function for other developers. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-12Revert "ipv4: Clarify in docs that accept_local requires rp_filter."Sébastien Barré1-8/+3
This reverts commit c801e3cc1925 ("ipv4: Clarify in docs that accept_local requires rp_filter."). It is not needed anymore since commit 1dced6a85482 ("ipv4: Restore accept_local behaviour in fib_validate_source()"). Suggested-by: Julian Anastasov <ja@ssi.bg> Cc: Gregory Detal <gregory.detal@uclouvain.be> Cc: Christoph Paasch <christoph.paasch@uclouvain.be> Cc: Hannes Frederic Sowa <hannes@redhat.com> Cc: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: Sébastien Barré <sebastien.barre@uclouvain.be> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-09-12 | e1000: switch to napi_gro_frags api | Florian Westphal | 1 | -17/+32
napi_gro_frags allows skb re-use in case GRO can merge payload pages into an skb on the GRO lists. netperf TCP_STREAM, kvm-e1000 emulation, mtu 9k: Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec old: 87380 16384 16384 30.00 8985.78 new: 87380 16384 16384 30.00 9907.05 Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
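In rough outline, the napi_gro_frags flow looks like this (a hedged sketch of the generic API usage, not the e1000 code itself; the truesize accounting assumes one full page per receive buffer):
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/mm.h>

    /* Borrow an skb from the NAPI context, attach the receive page as a
     * frag and hand it to GRO, which may recycle the skb for merging. */
    static void rx_page_to_gro(struct napi_struct *napi, struct page *page,
                               unsigned int offset, unsigned int len)
    {
            struct sk_buff *skb = napi_get_frags(napi);

            if (!skb)
                    return;                 /* nothing to do: treat as a drop */

            skb_fill_page_desc(skb, 0, page, offset, len);
            skb->len += len;
            skb->data_len += len;
            skb->truesize += PAGE_SIZE;

            napi_gro_frags(napi);
    }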
2014-09-12 | e1000: convert to build_skb | Florian Westphal | 3 | -120/+131
Instead of preallocating Rx skbs, allocate them right before sending inbound packet up the stack. e1000-kvm, mtu1500, netperf TCP_STREAM: Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec old: 87380 16384 16384 60.00 4532.40 new: 87380 16384 16384 60.00 4599.05 Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
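The core of such a conversion is the build_skb() call, roughly as below (a sketch assuming the driver left headroom before the packet and room for struct skb_shared_info at the end of the buffer; names are illustrative):
    #include <linux/skbuff.h>

    /* Wrap an already-filled receive buffer in an skb without copying.
     * frag_size must cover the data plus the skb_shared_info reserved
     * at the end of the buffer. */
    static struct sk_buff *wrap_rx_buffer(void *data, unsigned int frag_size,
                                          unsigned int hdr_room, unsigned int pkt_len)
    {
            struct sk_buff *skb = build_skb(data, frag_size);

            if (!skb)
                    return NULL;
            skb_reserve(skb, hdr_room);     /* headroom the driver left before the packet */
            skb_put(skb, pkt_len);
            return skb;
    }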
2014-09-12 | e1000: rename struct e1000_buffer to e1000_tx_buffer | Florian Westphal | 3 | -17/+17
... and remove *page; it's only used for Rx. Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-09-12 | e1000: add and use e1000_rx_buffer info for Rx | Florian Westphal | 3 | -23/+27
e1000 uses the same metadata struct for Rx and Tx, but Tx and Rx have different requirements. For Rx, we only need to store a buffer and a DMA address. A follow-up patch will remove the skb for Rx, bringing rx_buffer_info down to 16 bytes on x86_64. [ buffer_info is 48 bytes ] Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-09-12 | e1000: perform copybreak ahead of DMA unmap | Florian Westphal | 1 | -30/+43
Currently we unmap the DMA range, then copy to a new skb. Change this so we can keep the mapping in case the data is copied. Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
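The reordering amounts to syncing the mapping for the CPU and copying before any unmap; a hedged sketch of the idea follows, with a made-up threshold and helper name rather than the driver's actual copybreak code:
    #include <linux/dma-mapping.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    #define EX_COPYBREAK 256                /* hypothetical copybreak threshold */

    /* Small frame: copy into a fresh skb and hand the still-mapped buffer
     * back to the device; large frames are left for the unmap path. */
    static struct sk_buff *ex_copybreak(struct net_device *netdev, struct device *dev,
                                        dma_addr_t dma, const void *data, unsigned int len)
    {
            struct sk_buff *skb;

            if (len > EX_COPYBREAK)
                    return NULL;

            skb = netdev_alloc_skb_ip_align(netdev, len);
            if (!skb)
                    return NULL;

            dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
            skb_copy_to_linear_data(skb, data, len);
            skb_put(skb, len);
            dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);
            return skb;
    }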
2014-09-12 | e1000: move tbi workaround code into helper function | Florian Westphal | 1 | -30/+33
It's the same in both handlers. Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2014-09-12 | e1000: move e1000_tbi_adjust_stats to where its used | Florian Westphal | 3 | -80/+77
... and make it static. Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>