author     Debabrata Banerjee <dbanerje@akamai.com>  2018-10-18 11:18:26 -0400
committer  David S. Miller <davem@davemloft.net>     2018-10-19 17:01:43 -0700
commit     c9fbd71f73094311b31ee703a918e9e0df502cef (patch)
tree       1f3d85463e83f066e5aa7e335d70382f0fc23dba /drivers/net/team
parent     2e2d6f0342be7f73a34526077fa96f42f0e8c661 (diff)
download   linux-c9fbd71f73094311b31ee703a918e9e0df502cef.tar.bz2
netpoll: allow cleanup to be synchronous
This fixes a problem introduced by commit 2cde6acd49da ("netpoll: Fix __netpoll_rcu_free so that it can hold the rtnl lock").

When using netconsole on a bond, __netpoll_cleanup can asynchronously recurse multiple times, and each __netpoll_free_async call can result in further __netpoll_free_async calls. This means there is now a race between the cleanup_work queues on multiple netpoll_info's on multiple devices and the configuration of a new netpoll. For example, if a netconsole is set to enable 0, reconfigured, and set to enable 1 immediately, that netconsole will likely not work.

The reason __netpoll_free_async exists is that it can be called when rtnl is not locked; if rtnl is locked, we should be able to execute the cleanup synchronously, and it appears to be locked everywhere __netpoll_free_async is called from. Generalize the design pattern from the teaming driver for the current callers of __netpoll_free_async.

CC: Neil Horman <nhorman@tuxdriver.com>
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Debabrata Banerjee <dbanerje@akamai.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
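Based on the lines removed from team.c below, here is a minimal sketch of what the new __netpoll_free() helper presumably consolidates on the netpoll side. This is not part of the diff shown on this page; the ASSERT_RTNL() check and its exact placement are assumptions following the "rtnl is locked everywhere it's called from" reasoning above.

/* Sketch only: the real helper lives in net/core/netpoll.c and is not
 * shown in the drivers/net/team diff below.
 */
void __netpoll_free(struct netpoll *np)
{
	ASSERT_RTNL();	/* assumed: callers hold the rtnl lock */

	/* Wait for transmitting packets to finish before freeing,
	 * mirroring the lines removed from team_port_disable_netpoll().
	 */
	synchronize_rcu_bh();
	__netpoll_cleanup(np);
	kfree(np);
}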
Diffstat (limited to 'drivers/net/team')
-rw-r--r--  drivers/net/team/team.c  5
1 file changed, 1 insertion, 4 deletions
diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
index d887016e54b6..db633ae9f784 100644
--- a/drivers/net/team/team.c
+++ b/drivers/net/team/team.c
@@ -1104,10 +1104,7 @@ static void team_port_disable_netpoll(struct team_port *port)
 		return;
 	port->np = NULL;
 
-	/* Wait for transmitting packets to finish before freeing. */
-	synchronize_rcu_bh();
-	__netpoll_cleanup(np);
-	kfree(np);
+	__netpoll_free(np);
 }
 #else
 static int team_port_enable_netpoll(struct team_port *port)