path: root/net/core
Age | Commit message | Author | Files | Lines
2006-09-17 | [NEIGH]: neigh_table_clear() doesn't free stats | Kirill Korotaev | 1 | -0/+3
neigh_table_clear() doesn't free tbl->stats. Found by Alexey Kuznetsov. Though Alexey considers this leak minor for mainstream, I still believe that cleanup code should not forget to free some of the resources :) At least, this is critical for OpenVZ with virtualized neighbour tables. Signed-Off-By: Kirill Korotaev <dev@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
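The fix amounts to releasing the per-cpu stats at the end of neigh_table_clear(). A sketch (tbl->stats is allocated with alloc_percpu() in neigh_table_init(), hence free_percpu()):

    free_percpu(tbl->stats);
    tbl->stats = NULL;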
2006-08-17 | [NET]: Disallow whitespace in network device names. | David S. Miller | 1 | -5/+14
It causes way too much trouble and confusion in userspace. Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-17 | [NET]: Fix potential stack overflow in net/core/utils.c | Suresh Siddha | 1 | -3/+4
On high-end systems (1024 or so CPUs) this can potentially cause a stack overflow. Fix the stack usage. Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-17 | [VLAN]: Make sure bonding packet drop checks get done in hwaccel RX path. | David S. Miller | 1 | -17/+1
Since __vlan_hwaccel_rx() is essentially bypassing the netif_receive_skb() call that would have occurred if we did the VLAN decapsulation in software, we are missing the skb_bond() call and the associated checks it does. Export those checks via an inline function, skb_bond_should_drop(), and use this in __vlan_hwaccel_rx(). Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-09 | Merge gregkh@master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 | Greg Kroah-Hartman | 4 | -4/+22
2006-08-09 | [NET]: add_timer -> mod_timer() in dst_run_gc() | Dmitry Mishin | 1 | -2/+1
Patch from Dmitry Mishin <dim@openvz.org>: Replace add_timer() with mod_timer() in dst_run_gc() in order to avoid a BUG message:

    CPU1                                   CPU2
    dst_run_gc() entered                   dst_run_gc() entered
    spin_lock(&dst_lock)                   .....
    del_timer(&dst_gc_timer)               fail to get lock
    ....                                   mod_timer() <--- puts timer back on the list
    add_timer(&dst_gc_timer) <--- BUG because timer is already on the list.

Found during OpenVZ internal testing. At first we thought that it is OpenVZ specific as we added dst_run_gc(0) call in dst_dev_event(), but as Alexey pointed out to me it is possible to trigger this condition in the mainstream kernel. F.e. the timer has fired on CPU2, but the handler was preempted by an irq before dst_lock is tried. Meanwhile, someone on CPU1 adds an entry to the gc list and starts the timer. If CPU2 was preempted long enough, this timer can expire simultaneously with resuming the timer handler on CPU1, arriving exactly at the situation described. Signed-off-by: Dmitry Mishin <dim@openvz.org> Signed-off-by: Kirill Korotaev <dev@openvz.org> Signed-off-by: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru> Signed-off-by: David S. Miller <davem@davemloft.net>
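A minimal sketch of the resulting pattern (timer and lock names taken from net/core/dst.c; the GC body and exact expiry value are elided):

    static void dst_run_gc(unsigned long dummy)
    {
        if (!spin_trylock(&dst_lock)) {
            /* Lost the race for dst_lock: just ask to be run again soon.
             * This path can put the timer back on the list at any time. */
            mod_timer(&dst_gc_timer, jiffies + HZ / 10);
            return;
        }

        /* ... walk and free unreferenced dst entries ... */

        /* mod_timer() re-arms the timer even if the trylock path above
         * has already re-added it; add_timer() would BUG() in that case. */
        mod_timer(&dst_gc_timer, jiffies + dst_gc_timer_expires);
        spin_unlock(&dst_lock);
    }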
2006-08-08 | [RTNETLINK]: Fix IFLA_ADDRESS handling. | David S. Miller | 1 | -1/+14
The ->set_mac_address handlers expect a pointer to a sockaddr which contains the MAC address, whereas IFLA_ADDRESS provides just the MAC address itself. So whip up a sockaddr to wrap around the netlink attribute for the ->set_mac_address call. Signed-off-by: David S. Miller <davem@davemloft.net>
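A sketch of that wrapping (attribute lookup and error handling simplified; the ida[] attribute array indexing follows the rtnetlink code of the time and is an assumption here):

    struct sockaddr *sa;
    unsigned int len = sizeof(sa_family_t) + dev->addr_len;
    int err;

    sa = kmalloc(len, GFP_KERNEL);
    if (!sa)
        return -ENOMEM;

    /* ->set_mac_address() wants a struct sockaddr, but IFLA_ADDRESS
     * carries only the raw hardware address, so build one around it. */
    sa->sa_family = dev->type;
    memcpy(sa->sa_data, RTA_DATA(ida[IFLA_ADDRESS - 1]), dev->addr_len);
    err = dev->set_mac_address(dev, sa);
    kfree(sa);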
2006-08-07 | [PKTGEN]: Make sure skb->{nh,h} are initialized in fill_packet_ipv6() too. | David S. Miller | 1 | -0/+2
Mirror the bug fix from fill_packet_ipv4() Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-07 | [PKTGEN]: Fix oops when used with balance-tlb bonding | Chen-Li Tien | 1 | -0/+2
Signed-off-by: Chen-Li Tien <cltien@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-07 | [NET]: Assign skb->dev in netdev_alloc_skb | Christoph Hellwig | 1 | -1/+3
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-04 | [PATCH] Send wireless netlink events with a clean slate | Herbert Xu | 1 | -1/+23
Drivers expect to be able to call wireless_send_event in arbitrary contexts. On the other hand, netlink really doesn't like being invoked in an IRQ context. So we need to postpone the sending of netlink skb's to a tasklet. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: John W. Linville <linville@tuxdriver.com>
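A sketch of that deferral, with names modelled on the patch (the netlink socket pointer and multicast group used here are assumptions):

    static struct sk_buff_head wireless_nlevent_queue;  /* skb_queue_head_init() at boot */

    static void wireless_nlevent_process(unsigned long data)
    {
        struct sk_buff *skb;

        /* Tasklet (softirq) context: safe to hand skbs to netlink here. */
        while ((skb = skb_dequeue(&wireless_nlevent_queue)))
            netlink_broadcast(rtnl, skb, 0, RTNLGRP_LINK, GFP_ATOMIC);
    }

    static DECLARE_TASKLET(wireless_nlevent_tasklet, wireless_nlevent_process, 0);

    /* Callable from any context, including hard IRQ: only queue and kick. */
    static void wireless_nlevent_send(struct sk_buff *skb)
    {
        skb_queue_tail(&wireless_nlevent_queue, skb);
        tasklet_schedule(&wireless_nlevent_tasklet);
    }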
2006-08-02 | [NET]: Fix more per-cpu typos | Alexey Dobriyan | 1 | -2/+2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-02 | [I/OAT]: Remove CPU hotplug lock from net_dma_rebalance | Chris Leech | 1 | -5/+0
Remove the lock_cpu_hotplug()/unlock_cpu_hotplug() calls from net_dma_rebalance. The lock_cpu_hotplug()/unlock_cpu_hotplug() sequence in net_dma_rebalance is both incorrect (as pointed out by David Miller), because lock_cpu_hotplug() may sleep while the net_dma_event_lock spinlock is held, and unnecessary (as pointed out by Andrew Morton), as spin_lock() disables preemption, which protects from CPU hotplug events. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-02 | [NET]: skb_queue_lock_key() is no longer used. | Adrian Bunk | 1 | -7/+0
Signed-off-by: Adrian Bunk <bunk@stusta.de> Acked-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-02 | [NET]: Kill the WARN_ON() calls for checksum fixups. | David S. Miller | 1 | -10/+0
We have a more complete solution in the works, involving the separation of CHECKSUM_HW on input vs. output, and having netfilter properly do incremental checksums. But that is a very involved patch and is thus 2.6.19 material. What we have now is infinitely better than the past, wherein all TSO packets were dropped due to corrupt checksums as soon as the NAT module was loaded. At least now the checksums do get fixed up; it just isn't the cleanest nor most optimal solution. Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-02 | [NET]: Add netdev_alloc_skb(). | Christoph Hellwig | 1 | -0/+24
Add a dev_alloc_skb variant that takes a struct net_device * parameter. For now that parameter is unused, but I'll use it to allocate the skb from node-local memory in a follow-up patch. Also there have been some other plans mentioned on the list that can use it. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
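A hedged usage sketch for a driver's RX-refill path (the helper and buffer length here are illustrative, not part of the patch):

    static int refill_rx_slot(struct net_device *dev, struct sk_buff **slot,
                              unsigned int buf_len)
    {
        /* The dev argument is unused for now, but passing it lets a later
         * change pick node-local memory without touching the callers. */
        struct sk_buff *skb = netdev_alloc_skb(dev, buf_len + NET_IP_ALIGN);

        if (!skb)
            return -ENOMEM;
        skb_reserve(skb, NET_IP_ALIGN);  /* align the IP header */
        *slot = skb;
        return 0;
    }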
2006-08-02 | [NET]: Core net changes to generate netevents | Tom Tucker | 2 | -7/+9
Generate netevents for:
 - neighbour changes
 - routing redirects
 - pmtu changes
Signed-off-by: Tom Tucker <tom@opengridcomputing.com> Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-08-02 | [NET]: Network Event Notifier Mechanism. | Tom Tucker | 1 | -0/+69
This patch uses notifier blocks to implement a network event notifier mechanism. Clients register their callback function by calling register_netevent_notifier() like this:

    static struct notifier_block nb = {
        .notifier_call = my_callback_func
    };
    ...
    register_netevent_notifier(&nb);

Signed-off-by: Tom Tucker <tom@opengridcomputing.com> Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: David S. Miller <davem@davemloft.net>
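A sketch of what the registered callback might do with the events this series generates (the NETEVENT_* constants come from the new include/net/netevent.h; the interpretation of each ctx pointer is an assumption here):

    static int my_callback_func(struct notifier_block *nb,
                                unsigned long event, void *ctx)
    {
        switch (event) {
        case NETEVENT_NEIGH_UPDATE:
            /* ctx points at the struct neighbour that changed */
            break;
        case NETEVENT_REDIRECT:
            /* ctx describes the routing redirect */
            break;
        case NETEVENT_PMTU_UPDATE:
            /* ctx refers to the route whose PMTU changed */
            break;
        }
        return NOTIFY_DONE;
    }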
2006-08-02 | [NET]: Fix ___pskb_trim when entire frag_list needs dropping | Herbert Xu | 1 | -4/+10
When the trim point is within the head and there is no paged data, ___pskb_trim fails to drop the first element in the frag_list. This patch fixes this by moving the len <= offset case out of the page data loop. This patch also adds a missing kfree_skb on the frag that we just cloned. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-21 | [I/OAT]: net/core/user_dma.c should #include <net/netdma.h> | Adrian Bunk | 1 | -0/+1
Every file should #include the headers containing the prototypes for its global functions. Especially in cases like this one where gcc can tell us through a compile error that the prototype was wrong... Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-17 | [NET] ethtool: fix oops by testing correct struct member | Jeff Garzik | 1 | -1/+1
Noticed by Willy Tarreau. Signed-off-by: Jeff Garzik <jeff@garzik.org>
2006-07-13 | [NET]: Update frag_list in pskb_trim | Herbert Xu | 1 | -26/+65
When pskb_trim has to defer to ___pskb_trim to trim the frag_list part of the packet, the frag_list is not updated to reflect the trimming. This will usually work fine until you hit something that uses the packet length or tail from the frag_list. Examples include esp_output and ip_fragment. Another problem caused by this is that you can end up with a linear packet with a frag_list attached. It is possible to get away with this if we audit everything to make sure that they always consult skb->len before going down onto frag_list. In fact we can do the same thing for the paged part as well to avoid copying the data area of the skb. For now though, let's do the conservative fix and update frag_list. Many thanks to Marco Berizzi for helping me to track down this bug. This 4-year old bug took 3 months to track down. Marco was very patient indeed :) Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-12 | [NET]: fix __sk_stream_mem_reclaim | Ian McDonald | 1 | -9/+7
__sk_stream_mem_reclaim is only called by sk_stream_mem_reclaim. As such the check on sk->sk_forward_alloc is not needed and can be removed. Signed-off-by: Ian McDonald <ian.mcdonald@jandi.co.nz> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-08 | [NET] gso: Fix up GSO packets with broken checksums | Herbert Xu | 1 | -4/+32
Certain subsystems in the stack (e.g., netfilter) can break the partial checksum on GSO packets. Until they're fixed, this patch allows this to work by recomputing the partial checksums through the GSO mechanism. Once they've all been converted to update the partial checksum instead of clearing it, this workaround can be removed. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-07 | [NET]: Fix network device interface printk message priority | Stephen Hemminger | 1 | -3/+3
The printk's in the network device interface code should all be tagged with severity. Signed-off-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-07-03 | [PATCH] lockdep: annotate sk_locks | Ingo Molnar | 1 | -9/+88
Teach sk_lock semantics to the lock validator. In the softirq path the slock has mutex_trylock()+mutex_unlock() semantics, in the process context sock_lock() case it has mutex_lock()/mutex_unlock() semantics. Thus we treat sock_owned_by_user() flagged areas as an exclusion area too, not just those areas covered by a held sk_lock.slock. Effect on non-lockdep kernels: minimal, sk_lock_sock_init() has been turned into an inline function. Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-03 | [PATCH] lockdep: annotate sock_lock_init() | Ingo Molnar | 1 | -0/+16
Teach special (multi-initialized, per-address-family) locking code to the lock validator. Has no effect on non-lockdep kernels. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-07-03 | [PATCH] lockdep: annotate skb_queue_head_init | Ingo Molnar | 1 | -0/+7
Teach special (multi-initialized) locking code to the lock validator. Has no effect on non-lockdep kernels. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-30 | Remove obsolete #include <linux/config.h> | Jörn Engel | 8 | -8/+0
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de> Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-06-29 | [NET]: make skb_release_data() static | Adrian Bunk | 1 | -1/+1
skb_release_data() no longer has any users in other files. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-29 | [AF_UNIX]: Datagram getpeersec | Catherine Zhang | 1 | -0/+11
This patch implements an API whereby an application can determine the label of its peer's Unix datagram sockets via the auxiliary data mechanism of recvmsg.

Patch purpose: This patch enables a security-aware application to retrieve the security context of the peer of a Unix datagram socket. The application can then use this security context to determine the security context for processing on behalf of the peer who sent the packet.

Patch design and implementation: The design and implementation is very similar to the UDP case for INET sockets. Basically we build upon the existing Unix domain socket API for retrieving user credentials. Linux offers the API for obtaining user credentials via ancillary messages (i.e., out of band/control messages that are bundled together with a normal message). To retrieve the security context, the application first indicates to the kernel such desire by setting the SO_PASSSEC option via setsockopt. Then the application retrieves the security context using the auxiliary data mechanism. An example server application for a Unix datagram socket should look like this:

    toggle = 1;
    setsockopt(sockfd, SOL_SOCKET, SO_PASSSEC, &toggle, sizeof(toggle));
    recvmsg(sockfd, &msg_hdr, 0);
    if (msg_hdr.msg_controllen > sizeof(struct cmsghdr)) {
        cmsg_hdr = CMSG_FIRSTHDR(&msg_hdr);
        if (cmsg_hdr->cmsg_len <= CMSG_LEN(sizeof(scontext)) &&
            cmsg_hdr->cmsg_level == SOL_SOCKET &&
            cmsg_hdr->cmsg_type == SCM_SECURITY) {
            memcpy(&scontext, CMSG_DATA(cmsg_hdr), sizeof(scontext));
        }
    }

sock_setsockopt is enhanced with a new socket option SOCK_PASSSEC to allow a server socket to receive the security context of the peer.

Testing: We have tested the patch by setting up Unix datagram client and server applications. We verified that the server can retrieve the security context using the auxiliary data mechanism of recvmsg. Signed-off-by: Catherine Zhang <cxzhang@watson.ibm.com> Acked-by: James Morris <jmorris@namei.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-29 | [NET]: Make illegal_highdma more anal | Herbert Xu | 1 | -4/+2
Rather than having illegal_highdma as a macro when HIGHMEM is off, we can turn it into an inline function that returns zero. This will catch callers that give it bad arguments. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
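A sketch of the !CONFIG_HIGHMEM stub this describes:

    #ifndef CONFIG_HIGHMEM
    /* Unlike a "(0)" macro, the inline still type-checks its arguments. */
    static inline int illegal_highdma(struct net_device *dev, struct sk_buff *skb)
    {
        return 0;
    }
    #endif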
2006-06-29 | [NETLINK]: Encapsulate eff_cap usage within security framework. | Darrel Goeddel | 1 | -1/+1
This patch encapsulates the usage of eff_cap (in netlink_skb_params) within the security framework by extending security_netlink_recv to include a required capability parameter and converting all direct usage of eff_caps outside of the lsm modules to use the interface. It also updates the SELinux implementation of the security_netlink_send and security_netlink_recv hooks to take advantage of the sid in the netlink_skb_params struct. This also enables SELinux to perform auditing of netlink capability checks. Please apply, for 2.6.18 if possible. Signed-off-by: Darrel Goeddel <dgoeddel@trustedcs.com> Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov> Acked-by: James Morris <jmorris@namei.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-29 | [NET]: Added GSO header verification | Herbert Xu | 2 | -11/+27
When GSO packets come from an untrusted source (e.g., a Xen guest domain), we need to verify the header integrity before passing it to the hardware. Since the first step in GSO is to verify the header, we can reuse that code by adding a new bit to gso_type: SKB_GSO_DODGY. Packets with this bit set can only be fed directly to devices with the corresponding bit NETIF_F_GSO_ROBUST. If the device doesn't have that bit, then the skb is fed to the GSO engine which will allow the packet to be sent to the hardware if it passes the header check. This patch changes the sg flag to a full features flag. The same method can be used to implement TSO ECN support. We simply have to mark packets with CWR set with SKB_GSO_ECN so that only hardware with a corresponding NETIF_F_TSO_ECN can accept them. The GSO engine can either fully segment the packet, or segment the first MTU and pass the rest to the hardware for further segmentation. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-26 | [NET] netpoll: break recursive loop in netpoll rx path | Neil Horman | 1 | -2/+24
The netpoll system currently has a rx to tx path via:

    netpoll_rx
      __netpoll_rx
        arp_reply
          netpoll_send_skb
            dev->hard_start_xmit

This rx->tx loop places network drivers at risk of inadvertently causing a deadlock or BUG halt by recursively trying to acquire a spinlock that is used in both their rx and tx paths (this problem was originally reported to me in the 3c59x driver, which shares a spinlock between the boomerang_interrupt and boomerang_start_xmit routines). This patch breaks this loop by queueing arp frames, so that they can be responded to after all receive operations have been completed. Tested by myself and the reporter with successful results. Specifically it was tested with netdump. Here's the BZ with details: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=194055 Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Matt Mackall <mpm@selenic.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
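A sketch of the deferral (the queue actually lives in the per-device netpoll state; the names here are illustrative):

    static struct sk_buff_head arp_tx;  /* skb_queue_head_init() at setup time */

    /* rx path: the driver's rx-side lock may still be held, so never call
     * arp_reply() from here -- just park the frame. */
    static void netpoll_defer_arp(struct sk_buff *skb)
    {
        skb_queue_tail(&arp_tx, skb);
    }

    /* After receive processing has finished: arp_reply() -> netpoll_send_skb()
     * -> dev->hard_start_xmit() can now take the driver's tx lock without
     * recursing on a lock the rx path still holds. */
    static void service_arp_queue(void)
    {
        struct sk_buff *skb;

        while ((skb = skb_dequeue(&arp_tx)))
            arp_reply(skb);
    }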
2006-06-26 | [NET] netpoll: don't spin forever sending to stopped queues | Jeremy Fitzhardinge | 1 | -7/+3
When transmitting a skb in netpoll_send_skb(), only retry a limited number of times if the device queue is stopped. Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org> Acked-by: Matt Mackall <mpm@selenic.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-26 | [NET]: skb_find_text ignores to argument | Phil Oester | 1 | -1/+4
skb_find_text takes a "to" argument which is supposed to limit how far into the skb it will search for the given text. At present, it seems to ignore that argument on the first skb, and instead return a match even if the text occurs beyond the limit. Patch below fixes this, after adjusting for the "from" starting point. This consequently fixes the netfilter string match's "--to" handling, which currently is broken. Signed-off-by: Phil Oester <kernel@linuxace.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-25 | [NET]: make net/core/dev.c:netdev_nit static | Adrian Bunk | 1 | -1/+1
netdev_nit can now become static. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-25 | [NET]: Fix GSO problems in dev_hard_start_xmit() | Michael Chan | 1 | -0/+3
Fix 2 problems in dev_hard_start_xmit():
 1. nskb->next needs to link back to skb->next if hard_start_xmit() returns non-zero.
 2. Since the total number of GSO fragments may exceed MAX_SKB_FRAGS + 1, it needs to stop transmitting if the netif_queue is stopped.
Signed-off-by: Michael Chan <mchan@broadcom.com> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-23 | Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 | Linus Torvalds | 3 | -27/+317
* master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6:
  [NET]: Require CAP_NET_ADMIN to create tuntap devices.
  [NET]: fix net-core kernel-doc
  [TCP]: Move inclusion of <linux/dmaengine.h> to correct place in <linux/tcp.h>
  [IPSEC]: Handle GSO packets
  [NET]: Added GSO toggle
  [NET]: Add software TSOv4
  [NET]: Add generic segmentation offload
  [NET]: Merge TSO/UFO fields in sk_buff
  [NET]: Prevent transmission after dev_deactivate
  [IPV6] ADDRCONF: Fix default source address selection without CONFIG_IPV6_PRIVACY
  [IPV6]: Fix source address selection.
  [NET]: Avoid allocating skb in skb_pad
2006-06-23 | [PATCH] list: use list_replace_init() instead of list_splice_init() | Oleg Nesterov | 2 | -6/+5
list_splice_init(list, head) does unneeded job if it is known that list_empty(head) == 1. We can use list_replace_init() instead. Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
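A sketch of the pattern (draining a shared list under a lock, where the local head is known to start out empty):

    static LIST_HEAD(pending);            /* protected by pending_lock */
    static DEFINE_SPINLOCK(pending_lock);

    static void drain_pending(void)
    {
        LIST_HEAD(local);                 /* empty, so splicing onto it is overkill */

        spin_lock_irq(&pending_lock);
        /* 'local' takes over the whole chain in one step and 'pending' is
         * re-initialized to empty -- the same result list_splice_init()
         * would produce, minus the unneeded work. */
        list_replace_init(&pending, &local);
        spin_unlock_irq(&pending_lock);

        /* walk 'local' and process the entries without holding the lock */
    }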
2006-06-23 | [NET]: fix net-core kernel-doc | Randy Dunlap | 1 | -2/+2
Warning(/var/linsrc/linux-2617-g4//include/linux/skbuff.h:304): No description found for parameter 'dma_cookie'
Warning(/var/linsrc/linux-2617-g4//include/net/sock.h:1274): No description found for parameter 'copied_early'
Warning(/var/linsrc/linux-2617-g4//net/core/dev.c:3309): No description found for parameter 'chan'
Warning(/var/linsrc/linux-2617-g4//net/core/dev.c:3309): No description found for parameter 'event'
Signed-off-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-23 | [NET]: Added GSO toggle | Herbert Xu | 1 | -0/+29
This patch adds a generic segmentation offload toggle that can be turned on/off for each net device. For now it is only supported for TCPv4. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-23 | [NET]: Add software TSOv4 | Herbert Xu | 1 | -0/+126
This patch adds the GSO implementation for IPv4 TCP. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-23 | [NET]: Add generic segmentation offload | Herbert Xu | 1 | -5/+122
This patch adds the infrastructure for generic segmentation offload. The idea is to tap into the potential savings of TSO without hardware support by postponing the allocation of segmented skb's until just before the entry point into the NIC driver. The same structure can be used to support software IPv6 TSO, as well as UFO and segmentation offload for other relevant protocols, e.g., DCCP. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
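Roughly what the deferral looks like at the entry point into the driver (a simplified, hypothetical helper; the real dev_hard_start_xmit() also requeues any unsent segments and keeps the original skb around for accounting):

    static int gso_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        struct sk_buff *segs, *next;
        int rc = 0;

        /* Hand the GSO skb straight to hardware that can take it. */
        if (!netif_needs_gso(dev, skb))
            return dev->hard_start_xmit(skb, dev);

        /* Otherwise segment in software at the last possible moment. */
        segs = skb_gso_segment(skb, dev->features);
        if (IS_ERR(segs)) {
            kfree_skb(skb);
            return -EIO;                  /* illustrative error handling */
        }
        kfree_skb(skb);

        while (segs) {
            next = segs->next;
            segs->next = NULL;
            rc = dev->hard_start_xmit(segs, dev);
            if (rc) {
                /* Driver refused (e.g. queue stopped): stop here; in the
                 * real code the remaining chain would be requeued. */
                segs->next = next;
                break;
            }
            segs = next;
        }
        return rc;
    }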
2006-06-23 | [NET]: Merge TSO/UFO fields in sk_buff | Herbert Xu | 1 | -7/+9
Having separate fields in sk_buff for TSO/UFO (tso_size/ufo_size) is not going to scale if we add any more segmentation methods (e.g., DCCP). So let's merge them. They were used to tell the protocol of a packet. This function has been subsumed by the new gso_type field. This is essentially a set of netdev feature bits (shifted by 16 bits) that are required to process a specific skb. As such it's easy to tell whether a given device can process a GSO skb: you just have to AND the gso_type field with the netdev's features field. I've made gso_type a conjunction. The idea is that you have a base type (e.g., SKB_GSO_TCPV4) that can be modified further to support new features. For example, if we add a hardware TSO type that supports ECN, they would declare NETIF_F_TSO | NETIF_F_TSO_ECN. All TSO packets with CWR set would have a gso_type of SKB_GSO_TCPV4 | SKB_GSO_TCPV4_ECN while all other TSO packets would be SKB_GSO_TCPV4. This means that only the CWR packets need to be emulated in software. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
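The device check described above reduces to a few lines (a sketch; helper name and shift constant as used by the GSO patches):

    /* gso_type bits sit 16 positions below the corresponding NETIF_F_*
     * feature bits, so "can this device take this skb?" is a single AND. */
    static inline int net_gso_ok(int features, int gso_type)
    {
        int feature = gso_type << NETIF_F_GSO_SHIFT;

        return (features & feature) == feature;
    }

    /* e.g.: net_gso_ok(dev->features, skb_shinfo(skb)->gso_type) */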
2006-06-23 | [NET]: Prevent transmission after dev_deactivate | Herbert Xu | 1 | -3/+3
The dev_deactivate function has bit-rotted since the introduction of lockless drivers. In particular, the spin_unlock_wait call at the end has no effect on the xmit routine of lockless drivers. With a little bit of work, we can make it much more useful by providing the guarantee that when it returns, no more calls to the xmit routine of the underlying driver will be made. The idea is simple. There are two entry points in to the xmit routine. The first comes from dev_queue_xmit. That one is easily stopped by using synchronize_rcu. This works because we set the qdisc to noop_qdisc before the synchronize_rcu call. That in turn causes all subsequent packets sent to dev_queue_xmit to be dropped. The synchronize_rcu call also ensures all outstanding calls leave their critical section. The other entry point is from qdisc_run. Since we now have a bit that indicates whether it's running, all we have to do is to wait until the bit is off. I've removed the loop to wait for __LINK_STATE_SCHED to clear. This is useless because netif_wake_queue can cause it to be set again. It is also harmless because we've disarmed qdisc_run. I've also removed the spin_unlock_wait on xmit_lock because its only purpose of making sure that all outstanding xmit_lock holders have exited is also given by dev_watchdog_down. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-23 | [NET]: Avoid allocating skb in skb_pad | Herbert Xu | 1 | -10/+26
First of all it is unnecessary to allocate a new skb in skb_pad since the existing one is not shared. More importantly, our hard_start_xmit interface does not allow a new skb to be allocated since that breaks requeueing. This patch uses pskb_expand_head to expand the existing skb and linearize it if needed. Actually, someone should sift through every instance of skb_pad on a non-linear skb as they do not fit the reasons why this was originally created. Incidentally, this fixes a minor bug when the skb is cloned (tcpdump, TCP, etc.). As it is skb_pad will simply write over a cloned skb. Because of the position of the write it is unlikely to cause problems but still it's best if we don't do it. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
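A simplified sketch of the reworked skb_pad(), assuming a linear skb (the real version also copes with non-linear skbs and computes the exact extra tailroom needed):

    int skb_pad(struct sk_buff *skb, int pad)
    {
        /* Fast path: private skb with enough tailroom, pad in place. */
        if (!skb_cloned(skb) && skb_tailroom(skb) >= pad) {
            memset(skb->data + skb->len, 0, pad);
            return 0;
        }

        /* Otherwise grow (and un-share) the existing skb instead of
         * allocating a replacement, which would break requeueing. */
        if (pskb_expand_head(skb, 0, pad, GFP_ATOMIC))
            return -ENOMEM;

        memset(skb->data + skb->len, 0, pad);
        return 0;
    }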
2006-06-17 | [ETHTOOL]: Fix UFO typo | Herbert Xu | 1 | -1/+2
The function ethtool_get_ufo was referring to ETHTOOL_GTSO instead of ETHTOOL_GUFO. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2006-06-17 | [NET]: Add NETIF_F_GEN_CSUM and NETIF_F_ALL_CSUM | Herbert Xu | 2 | -8/+4
The current stack treats NETIF_F_HW_CSUM and NETIF_F_NO_CSUM identically so we test for them in quite a few places. For the sake of brevity, I'm adding the macro NETIF_F_GEN_CSUM for these two. We also test the disjunct of NETIF_F_IP_CSUM and the other two in various places, for that purpose I've added NETIF_F_ALL_CSUM. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
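The two helper macros as described (a sketch of their composition, following the text above):

    #define NETIF_F_GEN_CSUM    (NETIF_F_NO_CSUM | NETIF_F_HW_CSUM)
    #define NETIF_F_ALL_CSUM    (NETIF_F_IP_CSUM | NETIF_F_GEN_CSUM)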