author     Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>	2014-03-11 02:10:36 +0530
committer  Rafael J. Wysocki <rafael.j.wysocki@intel.com>	2014-03-20 13:43:46 +0100
commit     180d86463257812dc17e5df912f3bddcc96abb00
tree       12a3ef30ebfa8bea3554b26e379403a8bcd495d3 /drivers/oprofile
parent     07494d547e92bde6857522d2a92ff70896aecadb
download   linux-180d86463257812dc17e5df912f3bddcc96abb00.tar.bz2
oprofile, nmi-timer: Fix CPU hotplug callback registration
Subsystems that want to register CPU hotplug callbacks, as well as perform
initialization for the CPUs that are already online, often do it as shown
below:
	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	register_cpu_notifier(&foobar_cpu_notifier);

	put_online_cpus();
This is wrong, since it is prone to ABBA deadlocks involving the
cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
with CPU hotplug operations).
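As an illustrative sketch only (not part of this patch; the lock names below refer to kernel/cpu.c internals), the two paths end up taking those locks in opposite order:

	/* Task A: subsystem init using the pattern above */
	get_online_cpus();		/* takes a read-side reference against cpu_hotplug.lock */
	register_cpu_notifier(&nb);	/* ...then blocks on cpu_add_remove_lock */

	/* Task B: a concurrent cpu_up()/cpu_down() */
	cpu_maps_update_begin();	/* takes cpu_add_remove_lock */
	cpu_hotplug_begin();		/* ...then waits for all get_online_cpus() readers */

Task A holds the read-side reference that Task B is waiting for, while Task B holds the mutex that Task A is waiting for, so neither can make progress.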
Instead, the correct and race-free way of performing the callback
registration is:
	cpu_notifier_register_begin();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	/* Note the use of the double underscored version of the API */
	__register_cpu_notifier(&foobar_cpu_notifier);

	cpu_notifier_register_done();
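For a fuller picture, here is a self-contained sketch of an init path using the race-free API. The foobar_* names are hypothetical, and the assumption that the subsystem only needs to react to CPUs coming online is for illustration only:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/notifier.h>

/* Hypothetical per-CPU setup, standing in for init_cpu() above */
static void foobar_init_cpu(int cpu)
{
	/* per-CPU initialization would go here */
}

static int foobar_cpu_callback(struct notifier_block *nb,
			       unsigned long action, void *hcpu)
{
	int cpu = (long)hcpu;

	switch (action) {
	case CPU_ONLINE:
	case CPU_ONLINE_FROZEN:
		foobar_init_cpu(cpu);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block foobar_cpu_notifier = {
	.notifier_call = foobar_cpu_callback,
};

static int __init foobar_init(void)
{
	int cpu, err;

	cpu_notifier_register_begin();

	for_each_online_cpu(cpu)
		foobar_init_cpu(cpu);

	/* Note the use of the double underscored version of the API */
	err = __register_cpu_notifier(&foobar_cpu_notifier);

	cpu_notifier_register_done();

	return err;
}

Because the walk over the online CPUs and the notifier registration both happen inside the same cpu_notifier_register_begin()/cpu_notifier_register_done() section, no CPU can come or go between the two steps.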
Fix the nmi-timer code in oprofile by using this latter form of callback
registration.
Cc: Robert Richter <rric@kernel.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Diffstat (limited to 'drivers/oprofile')
-rw-r--r--  drivers/oprofile/nmi_timer_int.c | 23
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/drivers/oprofile/nmi_timer_int.c b/drivers/oprofile/nmi_timer_int.c
index 76f1c9357f39..9559829fb234 100644
--- a/drivers/oprofile/nmi_timer_int.c
+++ b/drivers/oprofile/nmi_timer_int.c
@@ -108,8 +108,8 @@ static void nmi_timer_shutdown(void)
 	struct perf_event *event;
 	int cpu;
 
-	get_online_cpus();
-	unregister_cpu_notifier(&nmi_timer_cpu_nb);
+	cpu_notifier_register_begin();
+	__unregister_cpu_notifier(&nmi_timer_cpu_nb);
 	for_each_possible_cpu(cpu) {
 		event = per_cpu(nmi_timer_events, cpu);
 		if (!event)
@@ -119,7 +119,7 @@ static void nmi_timer_shutdown(void)
 		perf_event_release_kernel(event);
 	}
 
-	put_online_cpus();
+	cpu_notifier_register_done();
 }
 
 static int nmi_timer_setup(void)
@@ -132,20 +132,23 @@ static int nmi_timer_setup(void)
 	do_div(period, HZ);
 	nmi_timer_attr.sample_period = period;
 
-	get_online_cpus();
-	err = register_cpu_notifier(&nmi_timer_cpu_nb);
+	cpu_notifier_register_begin();
+	err = __register_cpu_notifier(&nmi_timer_cpu_nb);
 	if (err)
 		goto out;
+
 	/* can't attach events to offline cpus: */
 	for_each_online_cpu(cpu) {
 		err = nmi_timer_start_cpu(cpu);
-		if (err)
-			break;
+		if (err) {
+			cpu_notifier_register_done();
+			nmi_timer_shutdown();
+			return err;
+		}
 	}
-	if (err)
-		nmi_timer_shutdown();
+
 out:
-	put_online_cpus();
+	cpu_notifier_register_done();
 	return err;
 }