| author | Rusty Russell <rusty@rustcorp.com.au> | 2009-11-17 14:27:27 -0800 | 
|---|---|---|
| committer | Thomas Gleixner <tglx@linutronix.de> | 2009-11-18 14:52:25 +0100 | 
| commit | 2ea6dec4a22a6f66f6633876212fd4d195cf8277 (patch) | |
| tree | f630c63a9e20fab5b31caa88368293a203103408 /kernel/smp.c | |
| parent | 72f279b256d520e321a850880d094bc0bcbf45d6 (diff) | |
| download | linux-2ea6dec4a22a6f66f6633876212fd4d195cf8277.tar.bz2 | |
generic-ipi: Add smp_call_function_any()
Andrew points out that acpi-cpufreq uses cpumask_any, when it really
would prefer to use the same CPU if possible (to avoid an IPI).  In
general, this seems a good idea to offer.
[ tglx: Documented selection preference and inlined the UP case to
  avoid the copy of smp_call_function_single() and the extra
  EXPORT ]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Zhao Yakui <yakui.zhao@intel.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Diffstat (limited to 'kernel/smp.c')
```
 kernel/smp.c | 45 +++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 45 insertions(+), 0 deletions(-)
```
```diff
diff --git a/kernel/smp.c b/kernel/smp.c
index 8bd618f0364d..a8c76069cf50 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -319,6 +319,51 @@ int smp_call_function_single(int cpu, void (*func) (void *info), void *info,
 }
 EXPORT_SYMBOL(smp_call_function_single);
 
+/*
+ * smp_call_function_any - Run a function on any of the given cpus
+ * @mask: The mask of cpus it can run on.
+ * @func: The function to run. This must be fast and non-blocking.
+ * @info: An arbitrary pointer to pass to the function.
+ * @wait: If true, wait until function has completed.
+ *
+ * Returns 0 on success, else a negative status code (if no cpus were online).
+ * Note that @wait will be implicitly turned on in case of allocation failures,
+ * since we fall back to on-stack allocation.
+ *
+ * Selection preference:
+ *	1) current cpu if in @mask
+ *	2) any cpu of current node if in @mask
+ *	3) any other online cpu in @mask
+ */
+int smp_call_function_any(const struct cpumask *mask,
+			  void (*func)(void *info), void *info, int wait)
+{
+	unsigned int cpu;
+	const struct cpumask *nodemask;
+	int ret;
+
+	/* Try for same CPU (cheapest) */
+	cpu = get_cpu();
+	if (cpumask_test_cpu(cpu, mask))
+		goto call;
+
+	/* Try for same node. */
+	nodemask = cpumask_of_node(cpu);
+	for (cpu = cpumask_first_and(nodemask, mask); cpu < nr_cpu_ids;
+	     cpu = cpumask_next_and(cpu, nodemask, mask)) {
+		if (cpu_online(cpu))
+			goto call;
+	}
+
+	/* Any online will do: smp_call_function_single handles nr_cpu_ids. */
+	cpu = cpumask_any_and(mask, cpu_online_mask);
+call:
+	ret = smp_call_function_single(cpu, func, info, wait);
+	put_cpu();
+	return ret;
+}
+EXPORT_SYMBOL_GPL(smp_call_function_any);
+
 /**
  * __smp_call_function_single(): Run a function on another CPU
  * @cpu: The CPU to run on.
```