author	Ben Dooks <ben.dooks@codethink.co.uk>	2014-06-03 12:21:13 +0100
committer	David S. Miller <davem@davemloft.net>	2014-06-03 19:28:42 -0700
commit	530aa2d0d9d55ab2775d47621ddf4b5b15bc1110 (patch)
tree	50e84f59bd34c9b4c3c22caf3d63bce76032166d /include/net
parent	e51fb152318ee6502a2d224771b0bbbbda046128 (diff)
sh_eth: use RNC mode for packet reception
The current behaviour of the sh_eth driver is not to use the RNC bit for the receive ring. This means that every packet received not only generates an IRQ but also stops the receive ring DMA until the driver re-enables it after unloading the packet. As a result, the receive packet FIFO overflows because there is nowhere to put incoming packets, generating a number of the following errors:

net eth0: Receive FIFO Overflow

Since feedback from Yoshihiro Shimoda shows that every LSI supported by this driver should have the bit enabled, the best approach is to remove the RMCR default value from the per-system data and just write RMCR_RNC when initialising the RMCR register. This is discussed in the message (http://www.spinics.net/lists/netdev/msg284912.html).

I have tested the RMCR_RNC configuration with an NFS root filesystem and the driver has not failed yet. There are further test reports from Sergei Shtylyov and others for both the R8A7790 and R8A7791.

There is also feedback from Cao Minh Hiep[1], which reports the same issue in (http://comments.gmane.org/gmane.linux.network/316285) and shows that this change fixes the loss of UDP datagrams under iperf.

Tested-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Acked-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Acked-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
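The gist of the change, as a minimal C sketch under stated assumptions rather than the verbatim patch: the sh_eth_write() helper, the RMCR register index and the RMCR_RNC bit value are taken from the driver as described above, while the sh_eth_init_rmcr() wrapper is hypothetical, added only to frame the one-line write for illustration.

	/* sh_eth.h: RNC mode keeps the receive ring DMA running after each
	 * frame instead of stopping it until the driver re-enables
	 * reception.
	 */
	enum RMCR_BIT {
		RMCR_RNC = 0x00000001,
	};

	/* sh_eth.c: rather than writing a per-system rmcr_value taken from
	 * sh_eth_cpu_data (which defaulted to 0, i.e. RNC disabled), write
	 * RMCR_RNC unconditionally during device initialisation.
	 * sh_eth_init_rmcr() is a hypothetical wrapper; ndev is the device's
	 * struct net_device and RMCR is the driver's register index.
	 */
	static void sh_eth_init_rmcr(struct net_device *ndev)
	{
		sh_eth_write(ndev, RMCR_RNC, RMCR);	/* was: mdp->cd->rmcr_value */
	}

With RNC set, the controller continues fetching receive descriptors after each frame, so the per-packet stop/re-enable cycle (and the resulting FIFO overflow window) goes away; the rmcr_value field can then be dropped from the per-system data entirely.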